00:00:00.000 Started by upstream project "autotest-per-patch" build number 131993 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.064 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.065 The recommended git tool is: git 00:00:00.065 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.104 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.172 Using shallow fetch with depth 1 00:00:00.172 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.172 > git --version # timeout=10 00:00:00.237 > git --version # 'git version 2.39.2' 00:00:00.237 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.280 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.280 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.680 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.693 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.704 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD) 00:00:06.704 > git config core.sparsecheckout # timeout=10 00:00:06.718 > git read-tree -mu HEAD # timeout=10 00:00:06.734 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5 00:00:06.759 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser" 00:00:06.760 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10 00:00:06.844 [Pipeline] Start of Pipeline 00:00:06.853 [Pipeline] library 00:00:06.854 Loading library shm_lib@master 00:00:06.854 Library shm_lib@master is cached. Copying from home. 00:00:06.867 [Pipeline] node 00:00:06.876 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest 00:00:06.877 [Pipeline] { 00:00:06.884 [Pipeline] catchError 00:00:06.885 [Pipeline] { 00:00:06.897 [Pipeline] wrap 00:00:06.905 [Pipeline] { 00:00:06.913 [Pipeline] stage 00:00:06.914 [Pipeline] { (Prologue) 00:00:06.929 [Pipeline] echo 00:00:06.931 Node: VM-host-SM38 00:00:06.937 [Pipeline] cleanWs 00:00:06.949 [WS-CLEANUP] Deleting project workspace... 00:00:06.949 [WS-CLEANUP] Deferred wipeout is used... 
00:00:06.957 [WS-CLEANUP] done 00:00:07.214 [Pipeline] setCustomBuildProperty 00:00:07.297 [Pipeline] httpRequest 00:00:07.646 [Pipeline] echo 00:00:07.648 Sorcerer 10.211.164.20 is alive 00:00:07.655 [Pipeline] retry 00:00:07.656 [Pipeline] { 00:00:07.667 [Pipeline] httpRequest 00:00:07.673 HttpMethod: GET 00:00:07.673 URL: http://10.211.164.20/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.674 Sending request to url: http://10.211.164.20/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:07.686 Response Code: HTTP/1.1 200 OK 00:00:07.687 Success: Status code 200 is in the accepted range: 200,404 00:00:07.687 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:20.049 [Pipeline] } 00:00:20.067 [Pipeline] // retry 00:00:20.075 [Pipeline] sh 00:00:20.363 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz 00:00:20.379 [Pipeline] httpRequest 00:00:20.793 [Pipeline] echo 00:00:20.795 Sorcerer 10.211.164.20 is alive 00:00:20.805 [Pipeline] retry 00:00:20.807 [Pipeline] { 00:00:20.822 [Pipeline] httpRequest 00:00:20.828 HttpMethod: GET 00:00:20.828 URL: http://10.211.164.20/packages/spdk_3f50defdeb8f3eefb9d5db8d8edd4512747e3957.tar.gz 00:00:20.829 Sending request to url: http://10.211.164.20/packages/spdk_3f50defdeb8f3eefb9d5db8d8edd4512747e3957.tar.gz 00:00:20.851 Response Code: HTTP/1.1 200 OK 00:00:20.852 Success: Status code 200 is in the accepted range: 200,404 00:00:20.852 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_3f50defdeb8f3eefb9d5db8d8edd4512747e3957.tar.gz 00:01:28.344 [Pipeline] } 00:01:28.360 [Pipeline] // retry 00:01:28.367 [Pipeline] sh 00:01:28.652 + tar --no-same-owner -xf spdk_3f50defdeb8f3eefb9d5db8d8edd4512747e3957.tar.gz 00:01:32.077 [Pipeline] sh 00:01:32.362 + git -C spdk log --oneline -n5 00:01:32.362 3f50defde thread: Extended options for spdk_interrupt_register 00:01:32.362 28b353a57 nvme: interface to retrieve fd for a queue 00:01:32.362 58ae1bdd3 stdinc.h: move epoll header over here 00:01:32.362 458c5cd33 util: handle events for fd type eventfd 00:01:32.362 91e7a24c4 util: Extended options for spdk_fd_group_add 00:01:32.381 [Pipeline] writeFile 00:01:32.394 [Pipeline] sh 00:01:32.679 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:32.691 [Pipeline] sh 00:01:32.976 + cat autorun-spdk.conf 00:01:32.976 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:32.976 SPDK_TEST_NVME=1 00:01:32.976 SPDK_TEST_FTL=1 00:01:32.976 SPDK_TEST_ISAL=1 00:01:32.976 SPDK_RUN_ASAN=1 00:01:32.976 SPDK_RUN_UBSAN=1 00:01:32.976 SPDK_TEST_XNVME=1 00:01:32.976 SPDK_TEST_NVME_FDP=1 00:01:32.976 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:32.985 RUN_NIGHTLY=0 00:01:32.986 [Pipeline] } 00:01:33.001 [Pipeline] // stage 00:01:33.013 [Pipeline] stage 00:01:33.015 [Pipeline] { (Run VM) 00:01:33.026 [Pipeline] sh 00:01:33.368 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:33.368 + echo 'Start stage prepare_nvme.sh' 00:01:33.368 Start stage prepare_nvme.sh 00:01:33.368 + [[ -n 1 ]] 00:01:33.368 + disk_prefix=ex1 00:01:33.368 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]] 00:01:33.368 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]] 00:01:33.368 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf 00:01:33.368 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:33.368 ++ SPDK_TEST_NVME=1 00:01:33.368 ++ SPDK_TEST_FTL=1 00:01:33.368 ++ SPDK_TEST_ISAL=1 00:01:33.368 ++ SPDK_RUN_ASAN=1 00:01:33.368 ++ 
SPDK_RUN_UBSAN=1 00:01:33.368 ++ SPDK_TEST_XNVME=1 00:01:33.368 ++ SPDK_TEST_NVME_FDP=1 00:01:33.368 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:33.368 ++ RUN_NIGHTLY=0 00:01:33.368 + cd /var/jenkins/workspace/nvme-vg-autotest 00:01:33.368 + nvme_files=() 00:01:33.368 + declare -A nvme_files 00:01:33.368 + backend_dir=/var/lib/libvirt/images/backends 00:01:33.368 + nvme_files['nvme.img']=5G 00:01:33.368 + nvme_files['nvme-cmb.img']=5G 00:01:33.368 + nvme_files['nvme-multi0.img']=4G 00:01:33.368 + nvme_files['nvme-multi1.img']=4G 00:01:33.368 + nvme_files['nvme-multi2.img']=4G 00:01:33.368 + nvme_files['nvme-openstack.img']=8G 00:01:33.368 + nvme_files['nvme-zns.img']=5G 00:01:33.368 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:33.368 + (( SPDK_TEST_FTL == 1 )) 00:01:33.368 + nvme_files["nvme-ftl.img"]=6G 00:01:33.368 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:33.368 + nvme_files["nvme-fdp.img"]=1G 00:01:33.368 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:33.368 + for nvme in "${!nvme_files[@]}" 00:01:33.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G 00:01:33.369 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:33.369 + for nvme in "${!nvme_files[@]}" 00:01:33.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-ftl.img -s 6G 00:01:33.369 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:33.369 + for nvme in "${!nvme_files[@]}" 00:01:33.369 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G 00:01:33.939 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:33.939 + for nvme in "${!nvme_files[@]}" 00:01:33.939 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G 00:01:33.939 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:33.939 + for nvme in "${!nvme_files[@]}" 00:01:33.939 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G 00:01:34.200 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:34.200 + for nvme in "${!nvme_files[@]}" 00:01:34.200 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G 00:01:34.200 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.200 + for nvme in "${!nvme_files[@]}" 00:01:34.200 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G 00:01:34.200 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:34.200 + for nvme in "${!nvme_files[@]}" 00:01:34.200 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-fdp.img -s 1G 00:01:34.200 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:34.200 + for nvme in "${!nvme_files[@]}" 00:01:34.200 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G 00:01:35.142 Formatting 
'/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.142 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu 00:01:35.142 + echo 'End stage prepare_nvme.sh' 00:01:35.142 End stage prepare_nvme.sh 00:01:35.155 [Pipeline] sh 00:01:35.440 + DISTRO=fedora39 00:01:35.440 + CPUS=10 00:01:35.440 + RAM=12288 00:01:35.440 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:35.440 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex1-nvme.img -b /var/lib/libvirt/images/backends/ex1-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex1-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:35.440 00:01:35.440 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant 00:01:35.440 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk 00:01:35.440 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest 00:01:35.440 HELP=0 00:01:35.440 DRY_RUN=0 00:01:35.440 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,/var/lib/libvirt/images/backends/ex1-nvme.img,/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,/var/lib/libvirt/images/backends/ex1-nvme-fdp.img, 00:01:35.440 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:35.440 NVME_AUTO_CREATE=0 00:01:35.440 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex1-nvme-multi1.img:/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,, 00:01:35.440 NVME_CMB=,,,, 00:01:35.440 NVME_PMR=,,,, 00:01:35.440 NVME_ZNS=,,,, 00:01:35.440 NVME_MS=true,,,, 00:01:35.440 NVME_FDP=,,,on, 00:01:35.440 SPDK_VAGRANT_DISTRO=fedora39 00:01:35.440 SPDK_VAGRANT_VMCPU=10 00:01:35.440 SPDK_VAGRANT_VMRAM=12288 00:01:35.440 SPDK_VAGRANT_PROVIDER=libvirt 00:01:35.440 SPDK_VAGRANT_HTTP_PROXY= 00:01:35.440 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:35.440 SPDK_OPENSTACK_NETWORK=0 00:01:35.440 VAGRANT_PACKAGE_BOX=0 00:01:35.440 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:35.440 FORCE_DISTRO=true 00:01:35.440 VAGRANT_BOX_VERSION= 00:01:35.440 EXTRA_VAGRANTFILES= 00:01:35.440 NIC_MODEL=e1000 00:01:35.440 00:01:35.440 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt' 00:01:35.440 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest 00:01:37.987 Bringing machine 'default' up with 'libvirt' provider... 00:01:38.559 ==> default: Creating image (snapshot of base box volume). 00:01:38.821 ==> default: Creating domain with the following settings... 
00:01:38.821 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730714503_a5332a8258ff214994c5 00:01:38.821 ==> default: -- Domain type: kvm 00:01:38.821 ==> default: -- Cpus: 10 00:01:38.821 ==> default: -- Feature: acpi 00:01:38.821 ==> default: -- Feature: apic 00:01:38.821 ==> default: -- Feature: pae 00:01:38.821 ==> default: -- Memory: 12288M 00:01:38.821 ==> default: -- Memory Backing: hugepages: 00:01:38.821 ==> default: -- Management MAC: 00:01:38.821 ==> default: -- Loader: 00:01:38.821 ==> default: -- Nvram: 00:01:38.821 ==> default: -- Base box: spdk/fedora39 00:01:38.821 ==> default: -- Storage pool: default 00:01:38.821 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730714503_a5332a8258ff214994c5.img (20G) 00:01:38.821 ==> default: -- Volume Cache: default 00:01:38.821 ==> default: -- Kernel: 00:01:38.821 ==> default: -- Initrd: 00:01:38.821 ==> default: -- Graphics Type: vnc 00:01:38.821 ==> default: -- Graphics Port: -1 00:01:38.821 ==> default: -- Graphics IP: 127.0.0.1 00:01:38.821 ==> default: -- Graphics Password: Not defined 00:01:38.822 ==> default: -- Video Type: cirrus 00:01:38.822 ==> default: -- Video VRAM: 9216 00:01:38.822 ==> default: -- Sound Type: 00:01:38.822 ==> default: -- Keymap: en-us 00:01:38.822 ==> default: -- TPM Path: 00:01:38.822 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:38.822 ==> default: -- Command line args: 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-1-drive0, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:38.822 ==> default: -> value=-drive, 00:01:38.822 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:38.822 ==> default: -> value=-device, 00:01:38.822 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.084 ==> default: Creating shared folders metadata... 00:01:39.084 ==> default: Starting domain. 00:01:41.631 ==> default: Waiting for domain to get an IP address... 00:01:59.751 ==> default: Waiting for SSH to become available... 00:01:59.751 ==> default: Configuring and enabling network interfaces... 00:02:03.956 default: SSH address: 192.168.121.241:22 00:02:03.956 default: SSH username: vagrant 00:02:03.956 default: SSH auth method: private key 00:02:05.873 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:14.012 ==> default: Mounting SSHFS shared folder... 00:02:14.958 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:14.958 ==> default: Checking Mount.. 00:02:15.899 ==> default: Folder Successfully Mounted! 00:02:15.899 00:02:15.899 SUCCESS! 00:02:15.899 00:02:15.899 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:15.899 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:15.899 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:15.899 00:02:15.910 [Pipeline] } 00:02:15.925 [Pipeline] // stage 00:02:15.934 [Pipeline] dir 00:02:15.935 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt 00:02:15.936 [Pipeline] { 00:02:15.948 [Pipeline] catchError 00:02:15.950 [Pipeline] { 00:02:15.960 [Pipeline] sh 00:02:16.307 + vagrant ssh-config --host vagrant 00:02:16.307 + sed -ne '/^Host/,$p' 00:02:16.307 + tee ssh_conf 00:02:18.857 Host vagrant 00:02:18.857 HostName 192.168.121.241 00:02:18.857 User vagrant 00:02:18.857 Port 22 00:02:18.857 UserKnownHostsFile /dev/null 00:02:18.857 StrictHostKeyChecking no 00:02:18.857 PasswordAuthentication no 00:02:18.857 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:18.857 IdentitiesOnly yes 00:02:18.857 LogLevel FATAL 00:02:18.857 ForwardAgent yes 00:02:18.857 ForwardX11 yes 00:02:18.857 00:02:18.876 [Pipeline] withEnv 00:02:18.878 [Pipeline] { 00:02:18.890 [Pipeline] sh 00:02:19.178 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:19.178 source /etc/os-release 00:02:19.178 [[ -e /image.version ]] && img=$(< /image.version) 00:02:19.178 # Minimal, systemd-like check. 
00:02:19.178 if [[ -e /.dockerenv ]]; then 00:02:19.178 # Clear garbage from the node'\''s name: 00:02:19.178 # agt-er_autotest_547-896 -> autotest_547-896 00:02:19.178 # $HOSTNAME is the actual container id 00:02:19.178 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:19.178 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:19.178 # We can assume this is a mount from a host where container is running, 00:02:19.178 # so fetch its hostname to easily identify the target swarm worker. 00:02:19.178 container="$(< /etc/hostname) ($agent)" 00:02:19.178 else 00:02:19.178 # Fallback 00:02:19.178 container=$agent 00:02:19.178 fi 00:02:19.178 fi 00:02:19.178 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:19.178 ' 00:02:19.452 [Pipeline] } 00:02:19.467 [Pipeline] // withEnv 00:02:19.475 [Pipeline] setCustomBuildProperty 00:02:19.489 [Pipeline] stage 00:02:19.491 [Pipeline] { (Tests) 00:02:19.508 [Pipeline] sh 00:02:19.793 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:20.067 [Pipeline] sh 00:02:20.352 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:20.367 [Pipeline] timeout 00:02:20.368 Timeout set to expire in 50 min 00:02:20.369 [Pipeline] { 00:02:20.384 [Pipeline] sh 00:02:20.669 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:20.931 HEAD is now at 3f50defde thread: Extended options for spdk_interrupt_register 00:02:20.945 [Pipeline] sh 00:02:21.228 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:21.504 [Pipeline] sh 00:02:21.787 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:21.803 [Pipeline] sh 00:02:22.090 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:22.090 ++ readlink -f spdk_repo 00:02:22.090 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:22.090 + [[ -n /home/vagrant/spdk_repo ]] 00:02:22.090 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:22.090 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:22.090 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:22.090 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:22.090 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:22.090 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:22.090 + cd /home/vagrant/spdk_repo 00:02:22.090 + source /etc/os-release 00:02:22.090 ++ NAME='Fedora Linux' 00:02:22.090 ++ VERSION='39 (Cloud Edition)' 00:02:22.090 ++ ID=fedora 00:02:22.090 ++ VERSION_ID=39 00:02:22.090 ++ VERSION_CODENAME= 00:02:22.090 ++ PLATFORM_ID=platform:f39 00:02:22.090 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:22.090 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:22.090 ++ LOGO=fedora-logo-icon 00:02:22.090 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:22.090 ++ HOME_URL=https://fedoraproject.org/ 00:02:22.090 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:22.090 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:22.090 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:22.090 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:22.090 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:22.090 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:22.090 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:22.090 ++ SUPPORT_END=2024-11-12 00:02:22.090 ++ VARIANT='Cloud Edition' 00:02:22.090 ++ VARIANT_ID=cloud 00:02:22.090 + uname -a 00:02:22.090 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:22.090 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:22.664 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:22.664 Hugepages 00:02:22.664 node hugesize free / total 00:02:22.664 node0 1048576kB 0 / 0 00:02:22.664 node0 2048kB 0 / 0 00:02:22.664 00:02:22.664 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:22.925 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:22.925 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:22.925 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:02:22.925 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:22.925 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:22.925 + rm -f /tmp/spdk-ld-path 00:02:22.925 + source autorun-spdk.conf 00:02:22.925 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:22.925 ++ SPDK_TEST_NVME=1 00:02:22.925 ++ SPDK_TEST_FTL=1 00:02:22.925 ++ SPDK_TEST_ISAL=1 00:02:22.925 ++ SPDK_RUN_ASAN=1 00:02:22.925 ++ SPDK_RUN_UBSAN=1 00:02:22.925 ++ SPDK_TEST_XNVME=1 00:02:22.925 ++ SPDK_TEST_NVME_FDP=1 00:02:22.925 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:22.925 ++ RUN_NIGHTLY=0 00:02:22.925 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:22.925 + [[ -n '' ]] 00:02:22.925 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:22.925 + for M in /var/spdk/build-*-manifest.txt 00:02:22.925 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:22.925 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.925 + for M in /var/spdk/build-*-manifest.txt 00:02:22.925 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:22.925 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.925 + for M in /var/spdk/build-*-manifest.txt 00:02:22.925 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:22.925 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:22.925 ++ uname 00:02:22.925 + [[ Linux == \L\i\n\u\x ]] 00:02:22.925 + sudo dmesg -T 00:02:22.925 + sudo dmesg --clear 00:02:22.925 + dmesg_pid=5027 00:02:22.925 
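The dmesg lines around this point implement a clear-then-follow capture: the kernel ring buffer is cleared, a timestamped follower (dmesg -Tw) is started in the background, and its PID is recorded so the follower can be stopped when the run ends. Because the follower is backgrounded, xtrace prints the dmesg_pid assignment before the dmesg -Tw command itself appears. A minimal sketch of the same pattern, with the log path as an illustrative assumption:

    # Sketch of the clear-then-follow dmesg capture seen in this log.
    sudo dmesg --clear                  # drop kernel messages from before the run
    sudo dmesg -Tw > /tmp/dmesg.log &   # -T: readable timestamps, -w: wait for new messages
    dmesg_pid=$!                        # remember the follower's PID
    # ... test run happens here ...
    sudo kill "$dmesg_pid"              # stop the follower when done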
+ [[ Fedora Linux == FreeBSD ]] 00:02:22.925 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.925 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:22.925 + sudo dmesg -Tw 00:02:22.925 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:22.925 + [[ -x /usr/src/fio-static/fio ]] 00:02:22.925 + export FIO_BIN=/usr/src/fio-static/fio 00:02:22.925 + FIO_BIN=/usr/src/fio-static/fio 00:02:22.925 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:22.925 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:22.925 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:22.925 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.925 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:22.925 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:22.925 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.925 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:22.925 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.187 10:02:28 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:23.187 10:02:28 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:23.187 10:02:28 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:23.187 10:02:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:23.187 10:02:28 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:23.187 10:02:28 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:23.187 10:02:28 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:23.187 10:02:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:23.187 10:02:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:23.187 10:02:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.187 10:02:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.188 10:02:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.188 10:02:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.188 10:02:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.188 10:02:28 -- paths/export.sh@5 -- $ export PATH 00:02:23.188 10:02:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.188 10:02:28 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:23.188 10:02:28 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:23.188 10:02:28 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730714548.XXXXXX 00:02:23.188 10:02:28 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730714548.93CdL7 00:02:23.188 10:02:28 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:23.188 10:02:28 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:23.188 10:02:28 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:23.188 10:02:28 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:23.188 10:02:28 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:23.188 10:02:28 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:23.188 10:02:28 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:23.188 10:02:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.188 10:02:28 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:23.188 10:02:28 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:23.188 10:02:28 -- pm/common@17 -- $ local monitor 00:02:23.188 10:02:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.188 10:02:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.188 10:02:28 -- pm/common@25 -- $ sleep 1 00:02:23.188 10:02:28 -- pm/common@21 -- $ date +%s 00:02:23.188 10:02:28 -- pm/common@21 -- $ date +%s 00:02:23.188 10:02:28 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730714548 00:02:23.188 10:02:28 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730714548 00:02:23.188 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730714548_collect-cpu-load.pm.log 00:02:23.188 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730714548_collect-vmstat.pm.log 00:02:24.132 10:02:29 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:24.132 10:02:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:24.132 10:02:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:24.132 10:02:29 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:24.132 10:02:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:24.132 Mon Nov 4 10:02:29 AM UTC 2024 00:02:24.132 10:02:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:24.132 v25.01-pre-137-g3f50defde 00:02:24.132 10:02:29 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:24.132 10:02:29 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:24.132 10:02:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:24.132 10:02:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:24.132 10:02:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.132 ************************************ 00:02:24.132 START TEST asan 00:02:24.132 ************************************ 00:02:24.132 using asan 00:02:24.132 10:02:29 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:24.132 00:02:24.132 real 0m0.000s 00:02:24.132 user 0m0.000s 00:02:24.132 sys 0m0.000s 00:02:24.132 10:02:29 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:24.132 ************************************ 00:02:24.132 END TEST asan 00:02:24.132 ************************************ 00:02:24.132 10:02:29 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.392 10:02:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:24.392 10:02:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:24.392 10:02:29 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:24.392 10:02:29 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:24.392 10:02:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:24.392 ************************************ 00:02:24.392 START TEST ubsan 00:02:24.392 ************************************ 00:02:24.392 using ubsan 00:02:24.392 10:02:29 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:24.392 00:02:24.392 real 0m0.000s 00:02:24.392 user 0m0.000s 00:02:24.392 sys 0m0.000s 00:02:24.392 10:02:29 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:24.392 ************************************ 00:02:24.392 END TEST ubsan 00:02:24.392 10:02:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:24.392 ************************************ 00:02:24.392 10:02:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:24.392 10:02:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:24.392 10:02:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:24.392 10:02:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:24.392 10:02:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:24.392 10:02:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:24.392 10:02:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
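The START TEST / END TEST banners above come from autotest's run_test helper, which wraps a command in banner lines and bash's time builtin so each suite is timed and easy to grep for in the log; the real/user/sys lines between the banners are time's output. A minimal sketch of such a wrapper (the helper name matches the log; the body shown here is an assumption, not SPDK's exact implementation):

    # Hedged sketch of a run_test-style wrapper.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        time "$@"                        # emits the real/user/sys timings seen above
        local rc=$?
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }
    run_test asan echo 'using asan'      # prints the banners around "using asan"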
00:02:24.392 10:02:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:24.392 10:02:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:24.392 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:24.392 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:24.652 Using 'verbs' RDMA provider 00:02:35.590 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:45.605 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:45.605 Creating mk/config.mk...done. 00:02:45.605 Creating mk/cc.flags.mk...done. 00:02:45.605 Type 'make' to build. 00:02:45.605 10:02:51 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:45.606 10:02:51 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:45.606 10:02:51 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:45.606 10:02:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:45.606 ************************************ 00:02:45.606 START TEST make 00:02:45.606 ************************************ 00:02:45.606 10:02:51 make -- common/autotest_common.sh@1127 -- $ make -j10 00:02:45.864 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:02:45.864 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:02:45.864 meson setup builddir \ 00:02:45.864 -Dwith-libaio=enabled \ 00:02:45.864 -Dwith-liburing=enabled \ 00:02:45.864 -Dwith-libvfn=disabled \ 00:02:45.864 -Dwith-spdk=disabled \ 00:02:45.864 -Dexamples=false \ 00:02:45.864 -Dtests=false \ 00:02:45.864 -Dtools=false && \ 00:02:45.864 meson compile -C builddir && \ 00:02:45.864 cd -) 00:02:45.864 make[1]: Nothing to be done for 'all'. 
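The xnvme configure step echoed above can be reproduced outside of CI with the same meson options; a minimal sketch, assuming a local xnvme checkout with meson and ninja installed (paths illustrative):

    # Rebuild xnvme standalone with the feature set this CI run uses:
    # libaio and io_uring backends enabled, libvfn/SPDK backends and extras off.
    cd ~/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled \
        -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled \
        -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir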
00:02:48.420 The Meson build system 00:02:48.420 Version: 1.5.0 00:02:48.420 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:02:48.420 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:48.420 Build type: native build 00:02:48.420 Project name: xnvme 00:02:48.420 Project version: 0.7.5 00:02:48.420 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:48.420 C linker for the host machine: cc ld.bfd 2.40-14 00:02:48.420 Host machine cpu family: x86_64 00:02:48.420 Host machine cpu: x86_64 00:02:48.420 Message: host_machine.system: linux 00:02:48.420 Compiler for C supports arguments -Wno-missing-braces: YES 00:02:48.420 Compiler for C supports arguments -Wno-cast-function-type: YES 00:02:48.420 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:48.420 Run-time dependency threads found: YES 00:02:48.420 Has header "setupapi.h" : NO 00:02:48.420 Has header "linux/blkzoned.h" : YES 00:02:48.420 Has header "linux/blkzoned.h" : YES (cached) 00:02:48.420 Has header "libaio.h" : YES 00:02:48.420 Library aio found: YES 00:02:48.420 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:48.420 Run-time dependency liburing found: YES 2.2 00:02:48.420 Dependency libvfn skipped: feature with-libvfn disabled 00:02:48.420 Found CMake: /usr/bin/cmake (3.27.7) 00:02:48.420 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:02:48.420 Subproject spdk : skipped: feature with-spdk disabled 00:02:48.420 Run-time dependency appleframeworks found: NO (tried framework) 00:02:48.420 Run-time dependency appleframeworks found: NO (tried framework) 00:02:48.420 Library rt found: YES 00:02:48.420 Checking for function "clock_gettime" with dependency -lrt: YES 00:02:48.420 Configuring xnvme_config.h using configuration 00:02:48.420 Configuring xnvme.spec using configuration 00:02:48.420 Run-time dependency bash-completion found: YES 2.11 00:02:48.420 Message: Bash-completions: /usr/share/bash-completion/completions 00:02:48.420 Program cp found: YES (/usr/bin/cp) 00:02:48.420 Build targets in project: 3 00:02:48.420 00:02:48.420 xnvme 0.7.5 00:02:48.420 00:02:48.420 Subprojects 00:02:48.420 spdk : NO Feature 'with-spdk' disabled 00:02:48.420 00:02:48.420 User defined options 00:02:48.420 examples : false 00:02:48.420 tests : false 00:02:48.420 tools : false 00:02:48.420 with-libaio : enabled 00:02:48.420 with-liburing: enabled 00:02:48.420 with-libvfn : disabled 00:02:48.420 with-spdk : disabled 00:02:48.420 00:02:48.420 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:48.420 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:02:48.420 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:02:48.420 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:02:48.420 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:02:48.420 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:02:48.420 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:02:48.420 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:02:48.420 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:02:48.420 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:02:48.420 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:02:48.420 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 
00:02:48.420 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:02:48.420 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:02:48.679 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:02:48.679 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:02:48.679 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:02:48.679 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:02:48.679 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:02:48.679 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:02:48.679 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:02:48.679 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:02:48.679 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:02:48.679 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:02:48.679 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:02:48.679 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:02:48.679 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:02:48.679 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:02:48.679 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:02:48.679 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:02:48.679 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:02:48.679 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:02:48.679 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:02:48.679 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:02:48.679 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:02:48.679 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:02:48.679 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:02:48.679 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:02:48.679 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:02:48.679 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:02:48.679 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:02:48.679 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:02:48.679 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:02:48.679 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:02:48.679 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:02:48.679 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:02:48.679 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:02:48.679 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:02:48.679 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:02:48.679 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:02:48.679 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:02:48.679 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:02:48.679 [51/76] 
Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:02:48.679 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:02:48.679 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:02:48.940 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:02:48.940 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:02:48.940 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:02:48.940 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:02:48.940 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:02:48.940 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:02:48.940 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:02:48.940 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:02:48.940 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:02:48.940 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:02:48.940 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:02:48.940 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:02:48.940 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:02:48.940 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:02:48.940 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:02:48.940 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:02:48.940 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:02:48.940 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:02:48.940 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:02:49.199 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:02:49.458 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:02:49.458 [75/76] Linking static target lib/libxnvme.a 00:02:49.458 [76/76] Linking target lib/libxnvme.so.0.7.5 00:02:49.458 INFO: autodetecting backend as ninja 00:02:49.458 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:02:49.458 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:02:56.089 The Meson build system 00:02:56.089 Version: 1.5.0 00:02:56.089 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:02:56.089 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:02:56.089 Build type: native build 00:02:56.089 Program cat found: YES (/usr/bin/cat) 00:02:56.089 Project name: DPDK 00:02:56.089 Project version: 24.03.0 00:02:56.089 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:56.089 C linker for the host machine: cc ld.bfd 2.40-14 00:02:56.089 Host machine cpu family: x86_64 00:02:56.089 Host machine cpu: x86_64 00:02:56.089 Message: ## Building in Developer Mode ## 00:02:56.089 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:56.089 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:02:56.089 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:56.089 Program python3 found: YES (/usr/bin/python3) 00:02:56.089 Program cat found: YES (/usr/bin/cat) 00:02:56.089 Compiler for C supports arguments -march=native: YES 00:02:56.089 Checking for size of "void *" : 8 00:02:56.089 Checking for size of "void *" : 8 (cached) 00:02:56.089 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:02:56.089 Library m found: YES 00:02:56.089 Library numa found: YES 00:02:56.089 Has header "numaif.h" : YES 00:02:56.089 Library fdt found: NO 00:02:56.089 Library execinfo found: NO 00:02:56.089 Has header "execinfo.h" : YES 00:02:56.089 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:56.089 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:56.089 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:56.089 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:56.089 Run-time dependency openssl found: YES 3.1.1 00:02:56.089 Run-time dependency libpcap found: YES 1.10.4 00:02:56.089 Has header "pcap.h" with dependency libpcap: YES 00:02:56.089 Compiler for C supports arguments -Wcast-qual: YES 00:02:56.089 Compiler for C supports arguments -Wdeprecated: YES 00:02:56.089 Compiler for C supports arguments -Wformat: YES 00:02:56.089 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:56.089 Compiler for C supports arguments -Wformat-security: NO 00:02:56.089 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:56.089 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:56.089 Compiler for C supports arguments -Wnested-externs: YES 00:02:56.089 Compiler for C supports arguments -Wold-style-definition: YES 00:02:56.089 Compiler for C supports arguments -Wpointer-arith: YES 00:02:56.089 Compiler for C supports arguments -Wsign-compare: YES 00:02:56.089 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:56.089 Compiler for C supports arguments -Wundef: YES 00:02:56.089 Compiler for C supports arguments -Wwrite-strings: YES 00:02:56.089 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:56.089 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:56.089 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:56.089 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:56.089 Program objdump found: YES (/usr/bin/objdump) 00:02:56.089 Compiler for C supports arguments -mavx512f: YES 00:02:56.089 Checking if "AVX512 checking" compiles: YES 00:02:56.089 Fetching value of define "__SSE4_2__" : 1 00:02:56.089 Fetching value of define "__AES__" : 1 00:02:56.089 Fetching value of define "__AVX__" : 1 00:02:56.089 Fetching value of define "__AVX2__" : 1 00:02:56.089 Fetching value of define "__AVX512BW__" : 1 00:02:56.089 Fetching value of define "__AVX512CD__" : 1 00:02:56.089 Fetching value of define "__AVX512DQ__" : 1 00:02:56.089 Fetching value of define "__AVX512F__" : 1 00:02:56.089 Fetching value of define "__AVX512VL__" : 1 00:02:56.089 Fetching value of define "__PCLMUL__" : 1 00:02:56.089 Fetching value of define "__RDRND__" : 1 00:02:56.089 Fetching value of define "__RDSEED__" : 1 00:02:56.089 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:56.089 Fetching value of define "__znver1__" : (undefined) 00:02:56.089 Fetching value of define "__znver2__" : (undefined) 00:02:56.089 Fetching value of define "__znver3__" : (undefined) 00:02:56.089 Fetching value of define "__znver4__" : (undefined) 00:02:56.089 Library asan found: YES 00:02:56.089 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:56.089 Message: lib/log: Defining dependency "log" 00:02:56.089 Message: lib/kvargs: Defining dependency "kvargs" 00:02:56.089 Message: lib/telemetry: Defining dependency "telemetry" 00:02:56.089 Library rt found: YES 00:02:56.089 Checking for function "getentropy" : NO 00:02:56.089 Message: 
lib/eal: Defining dependency "eal" 00:02:56.089 Message: lib/ring: Defining dependency "ring" 00:02:56.089 Message: lib/rcu: Defining dependency "rcu" 00:02:56.089 Message: lib/mempool: Defining dependency "mempool" 00:02:56.089 Message: lib/mbuf: Defining dependency "mbuf" 00:02:56.089 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:56.089 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:56.089 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:56.089 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:56.089 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:56.089 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:56.089 Compiler for C supports arguments -mpclmul: YES 00:02:56.089 Compiler for C supports arguments -maes: YES 00:02:56.089 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:56.089 Compiler for C supports arguments -mavx512bw: YES 00:02:56.089 Compiler for C supports arguments -mavx512dq: YES 00:02:56.089 Compiler for C supports arguments -mavx512vl: YES 00:02:56.089 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:56.089 Compiler for C supports arguments -mavx2: YES 00:02:56.089 Compiler for C supports arguments -mavx: YES 00:02:56.089 Message: lib/net: Defining dependency "net" 00:02:56.089 Message: lib/meter: Defining dependency "meter" 00:02:56.089 Message: lib/ethdev: Defining dependency "ethdev" 00:02:56.089 Message: lib/pci: Defining dependency "pci" 00:02:56.089 Message: lib/cmdline: Defining dependency "cmdline" 00:02:56.089 Message: lib/hash: Defining dependency "hash" 00:02:56.089 Message: lib/timer: Defining dependency "timer" 00:02:56.089 Message: lib/compressdev: Defining dependency "compressdev" 00:02:56.089 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:56.089 Message: lib/dmadev: Defining dependency "dmadev" 00:02:56.089 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:56.089 Message: lib/power: Defining dependency "power" 00:02:56.089 Message: lib/reorder: Defining dependency "reorder" 00:02:56.089 Message: lib/security: Defining dependency "security" 00:02:56.089 Has header "linux/userfaultfd.h" : YES 00:02:56.089 Has header "linux/vduse.h" : YES 00:02:56.089 Message: lib/vhost: Defining dependency "vhost" 00:02:56.089 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:56.089 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:56.089 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:56.089 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:56.089 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:56.089 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:56.089 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:56.089 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:56.089 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:56.089 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:56.089 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:56.090 Configuring doxy-api-html.conf using configuration 00:02:56.090 Configuring doxy-api-man.conf using configuration 00:02:56.090 Program mandb found: YES (/usr/bin/mandb) 00:02:56.090 Program sphinx-build found: NO 00:02:56.090 Configuring rte_build_config.h using configuration 00:02:56.090 Message: 00:02:56.090 ================= 00:02:56.090 Applications Enabled 00:02:56.090 
00:02:56.090 =================
00:02:56.090 
00:02:56.090 apps:
00:02:56.090 
00:02:56.090 
00:02:56.090 Message:
00:02:56.090 =================
00:02:56.090 Libraries Enabled
00:02:56.090 =================
00:02:56.090 
00:02:56.090 libs:
00:02:56.090 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:56.090 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:56.090 cryptodev, dmadev, power, reorder, security, vhost,
00:02:56.090 
00:02:56.090 Message:
00:02:56.090 ===============
00:02:56.090 Drivers Enabled
00:02:56.090 ===============
00:02:56.090 
00:02:56.090 common:
00:02:56.090 
00:02:56.090 bus:
00:02:56.090 pci, vdev,
00:02:56.090 mempool:
00:02:56.090 ring,
00:02:56.090 dma:
00:02:56.090 
00:02:56.090 net:
00:02:56.090 
00:02:56.090 crypto:
00:02:56.090 
00:02:56.090 compress:
00:02:56.090 
00:02:56.090 vdpa:
00:02:56.090 
00:02:56.090 
00:02:56.090 Message:
00:02:56.090 =================
00:02:56.090 Content Skipped
00:02:56.090 =================
00:02:56.090 
00:02:56.090 apps:
00:02:56.090 dumpcap: explicitly disabled via build config
00:02:56.090 graph: explicitly disabled via build config
00:02:56.090 pdump: explicitly disabled via build config
00:02:56.090 proc-info: explicitly disabled via build config
00:02:56.090 test-acl: explicitly disabled via build config
00:02:56.090 test-bbdev: explicitly disabled via build config
00:02:56.090 test-cmdline: explicitly disabled via build config
00:02:56.090 test-compress-perf: explicitly disabled via build config
00:02:56.090 test-crypto-perf: explicitly disabled via build config
00:02:56.090 test-dma-perf: explicitly disabled via build config
00:02:56.090 test-eventdev: explicitly disabled via build config
00:02:56.090 test-fib: explicitly disabled via build config
00:02:56.090 test-flow-perf: explicitly disabled via build config
00:02:56.090 test-gpudev: explicitly disabled via build config
00:02:56.090 test-mldev: explicitly disabled via build config
00:02:56.090 test-pipeline: explicitly disabled via build config
00:02:56.090 test-pmd: explicitly disabled via build config
00:02:56.090 test-regex: explicitly disabled via build config
00:02:56.090 test-sad: explicitly disabled via build config
00:02:56.090 test-security-perf: explicitly disabled via build config
00:02:56.090 
00:02:56.090 libs:
00:02:56.090 argparse: explicitly disabled via build config
00:02:56.090 metrics: explicitly disabled via build config
00:02:56.090 acl: explicitly disabled via build config
00:02:56.090 bbdev: explicitly disabled via build config
00:02:56.090 bitratestats: explicitly disabled via build config
00:02:56.090 bpf: explicitly disabled via build config
00:02:56.090 cfgfile: explicitly disabled via build config
00:02:56.090 distributor: explicitly disabled via build config
00:02:56.090 efd: explicitly disabled via build config
00:02:56.090 eventdev: explicitly disabled via build config
00:02:56.090 dispatcher: explicitly disabled via build config
00:02:56.090 gpudev: explicitly disabled via build config
00:02:56.090 gro: explicitly disabled via build config
00:02:56.090 gso: explicitly disabled via build config
00:02:56.090 ip_frag: explicitly disabled via build config
00:02:56.090 jobstats: explicitly disabled via build config
00:02:56.090 latencystats: explicitly disabled via build config
00:02:56.090 lpm: explicitly disabled via build config
00:02:56.090 member: explicitly disabled via build config
00:02:56.090 pcapng: explicitly disabled via build config
00:02:56.090 rawdev: explicitly disabled via build config
00:02:56.090 regexdev: explicitly disabled via build config
00:02:56.090 mldev: explicitly disabled via build config
00:02:56.090 rib: explicitly disabled via build config
00:02:56.090 sched: explicitly disabled via build config
00:02:56.090 stack: explicitly disabled via build config
00:02:56.090 ipsec: explicitly disabled via build config
00:02:56.090 pdcp: explicitly disabled via build config
00:02:56.090 fib: explicitly disabled via build config
00:02:56.090 port: explicitly disabled via build config
00:02:56.090 pdump: explicitly disabled via build config
00:02:56.090 table: explicitly disabled via build config
00:02:56.090 pipeline: explicitly disabled via build config
00:02:56.090 graph: explicitly disabled via build config
00:02:56.090 node: explicitly disabled via build config
00:02:56.090 
00:02:56.090 drivers:
00:02:56.090 common/cpt: not in enabled drivers build config
00:02:56.090 common/dpaax: not in enabled drivers build config
00:02:56.090 common/iavf: not in enabled drivers build config
00:02:56.090 common/idpf: not in enabled drivers build config
00:02:56.090 common/ionic: not in enabled drivers build config
00:02:56.090 common/mvep: not in enabled drivers build config
00:02:56.090 common/octeontx: not in enabled drivers build config
00:02:56.090 bus/auxiliary: not in enabled drivers build config
00:02:56.090 bus/cdx: not in enabled drivers build config
00:02:56.090 bus/dpaa: not in enabled drivers build config
00:02:56.090 bus/fslmc: not in enabled drivers build config
00:02:56.090 bus/ifpga: not in enabled drivers build config
00:02:56.090 bus/platform: not in enabled drivers build config
00:02:56.090 bus/uacce: not in enabled drivers build config
00:02:56.090 bus/vmbus: not in enabled drivers build config
00:02:56.090 common/cnxk: not in enabled drivers build config
00:02:56.090 common/mlx5: not in enabled drivers build config
00:02:56.090 common/nfp: not in enabled drivers build config
00:02:56.090 common/nitrox: not in enabled drivers build config
00:02:56.090 common/qat: not in enabled drivers build config
00:02:56.090 common/sfc_efx: not in enabled drivers build config
00:02:56.090 mempool/bucket: not in enabled drivers build config
00:02:56.090 mempool/cnxk: not in enabled drivers build config
00:02:56.090 mempool/dpaa: not in enabled drivers build config
00:02:56.090 mempool/dpaa2: not in enabled drivers build config
00:02:56.090 mempool/octeontx: not in enabled drivers build config
00:02:56.090 mempool/stack: not in enabled drivers build config
00:02:56.090 dma/cnxk: not in enabled drivers build config
00:02:56.090 dma/dpaa: not in enabled drivers build config
00:02:56.090 dma/dpaa2: not in enabled drivers build config
00:02:56.090 dma/hisilicon: not in enabled drivers build config
00:02:56.090 dma/idxd: not in enabled drivers build config
00:02:56.090 dma/ioat: not in enabled drivers build config
00:02:56.090 dma/skeleton: not in enabled drivers build config
00:02:56.090 net/af_packet: not in enabled drivers build config
00:02:56.090 net/af_xdp: not in enabled drivers build config
00:02:56.090 net/ark: not in enabled drivers build config
00:02:56.090 net/atlantic: not in enabled drivers build config
00:02:56.090 net/avp: not in enabled drivers build config
00:02:56.090 net/axgbe: not in enabled drivers build config
00:02:56.090 net/bnx2x: not in enabled drivers build config
00:02:56.090 net/bnxt: not in enabled drivers build config
00:02:56.090 net/bonding: not in enabled drivers build config
00:02:56.090 net/cnxk: not in enabled drivers build config
00:02:56.090 net/cpfl: not in enabled drivers build config
00:02:56.090 net/cxgbe: not in enabled drivers build config
00:02:56.090 net/dpaa: not in enabled drivers build config
00:02:56.090 net/dpaa2: not in enabled drivers build config
00:02:56.090 net/e1000: not in enabled drivers build config
00:02:56.090 net/ena: not in enabled drivers build config
00:02:56.090 net/enetc: not in enabled drivers build config
00:02:56.090 net/enetfec: not in enabled drivers build config
00:02:56.090 net/enic: not in enabled drivers build config
00:02:56.090 net/failsafe: not in enabled drivers build config
00:02:56.090 net/fm10k: not in enabled drivers build config
00:02:56.091 net/gve: not in enabled drivers build config
00:02:56.091 net/hinic: not in enabled drivers build config
00:02:56.091 net/hns3: not in enabled drivers build config
00:02:56.091 net/i40e: not in enabled drivers build config
00:02:56.091 net/iavf: not in enabled drivers build config
00:02:56.091 net/ice: not in enabled drivers build config
00:02:56.091 net/idpf: not in enabled drivers build config
00:02:56.091 net/igc: not in enabled drivers build config
00:02:56.091 net/ionic: not in enabled drivers build config
00:02:56.091 net/ipn3ke: not in enabled drivers build config
00:02:56.091 net/ixgbe: not in enabled drivers build config
00:02:56.091 net/mana: not in enabled drivers build config
00:02:56.091 net/memif: not in enabled drivers build config
00:02:56.091 net/mlx4: not in enabled drivers build config
00:02:56.091 net/mlx5: not in enabled drivers build config
00:02:56.091 net/mvneta: not in enabled drivers build config
00:02:56.091 net/mvpp2: not in enabled drivers build config
00:02:56.091 net/netvsc: not in enabled drivers build config
00:02:56.091 net/nfb: not in enabled drivers build config
00:02:56.091 net/nfp: not in enabled drivers build config
00:02:56.091 net/ngbe: not in enabled drivers build config
00:02:56.091 net/null: not in enabled drivers build config
00:02:56.091 net/octeontx: not in enabled drivers build config
00:02:56.091 net/octeon_ep: not in enabled drivers build config
00:02:56.091 net/pcap: not in enabled drivers build config
00:02:56.091 net/pfe: not in enabled drivers build config
00:02:56.091 net/qede: not in enabled drivers build config
00:02:56.091 net/ring: not in enabled drivers build config
00:02:56.091 net/sfc: not in enabled drivers build config
00:02:56.091 net/softnic: not in enabled drivers build config
00:02:56.091 net/tap: not in enabled drivers build config
00:02:56.091 net/thunderx: not in enabled drivers build config
00:02:56.091 net/txgbe: not in enabled drivers build config
00:02:56.091 net/vdev_netvsc: not in enabled drivers build config
00:02:56.091 net/vhost: not in enabled drivers build config
00:02:56.091 net/virtio: not in enabled drivers build config
00:02:56.091 net/vmxnet3: not in enabled drivers build config
00:02:56.091 raw/*: missing internal dependency, "rawdev"
00:02:56.091 crypto/armv8: not in enabled drivers build config
00:02:56.091 crypto/bcmfs: not in enabled drivers build config
00:02:56.091 crypto/caam_jr: not in enabled drivers build config
00:02:56.091 crypto/ccp: not in enabled drivers build config
00:02:56.091 crypto/cnxk: not in enabled drivers build config
00:02:56.091 crypto/dpaa_sec: not in enabled drivers build config
00:02:56.091 crypto/dpaa2_sec: not in enabled drivers build config
00:02:56.091 crypto/ipsec_mb: not in enabled drivers build config
00:02:56.091 crypto/mlx5: not in enabled drivers build config
00:02:56.091 crypto/mvsam: not in enabled drivers build config
00:02:56.091 crypto/nitrox: not in enabled drivers build config
00:02:56.091 crypto/null: not in enabled drivers build config
00:02:56.091 crypto/octeontx: not in enabled drivers build config
00:02:56.091 crypto/openssl: not in enabled drivers build config
00:02:56.091 crypto/scheduler: not in enabled drivers build config
00:02:56.091 crypto/uadk: not in enabled drivers build config
00:02:56.091 crypto/virtio: not in enabled drivers build config
00:02:56.091 compress/isal: not in enabled drivers build config
00:02:56.091 compress/mlx5: not in enabled drivers build config
00:02:56.091 compress/nitrox: not in enabled drivers build config
00:02:56.091 compress/octeontx: not in enabled drivers build config
00:02:56.091 compress/zlib: not in enabled drivers build config
00:02:56.091 regex/*: missing internal dependency, "regexdev"
00:02:56.091 ml/*: missing internal dependency, "mldev"
00:02:56.091 vdpa/ifc: not in enabled drivers build config
00:02:56.091 vdpa/mlx5: not in enabled drivers build config
00:02:56.091 vdpa/nfp: not in enabled drivers build config
00:02:56.091 vdpa/sfc: not in enabled drivers build config
00:02:56.091 event/*: missing internal dependency, "eventdev"
00:02:56.091 baseband/*: missing internal dependency, "bbdev"
00:02:56.091 gpu/*: missing internal dependency, "gpudev"
00:02:56.091 
00:02:56.091 
00:02:56.091 Build targets in project: 84
00:02:56.091 
00:02:56.091 DPDK 24.03.0
00:02:56.091 
00:02:56.091 User defined options
00:02:56.091 buildtype : debug
00:02:56.091 default_library : shared
00:02:56.091 libdir : lib
00:02:56.091 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:56.091 b_sanitize : address
00:02:56.091 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:56.091 c_link_args : 
00:02:56.091 cpu_instruction_set: native
00:02:56.091 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:56.091 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:56.091 enable_docs : false
00:02:56.091 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:56.091 enable_kmods : false
00:02:56.091 max_lcores : 128
00:02:56.091 tests : false
00:02:56.091 
00:02:56.091 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:56.091 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:56.091 [1/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:56.091 [2/267] Linking static target lib/librte_kvargs.a
00:02:56.091 [3/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:56.091 [4/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:56.091 [5/267] Linking static target lib/librte_log.a
00:02:56.091 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:56.350 [7/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:56.350 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:56.350 [9/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.350 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
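The "User defined options" summary above maps one-to-one onto Meson option flags. A minimal sketch of an equivalent standalone invocation, assuming the stock DPDK 24.03 option names; the actual command line issued by SPDK's configure wrapper is not captured in this log:

# Hedged reconstruction of the meson setup call implied by the options summary;
# the build-tmp directory and install prefix are taken from the log above.
meson setup build-tmp \
    --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
    --libdir=lib \
    --buildtype=debug \
    --default-library=shared \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Dmax_lcores=128 \
    -Dtests=false \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
# A subsequent "ninja -C build-tmp -j 10" then performs the [1/267]..[267/267]
# steps that follow in this log.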
00:02:56.350 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:56.350 [12/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:56.350 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:56.350 [14/267] Linking static target lib/librte_telemetry.a 00:02:56.350 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:56.350 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:56.608 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:56.608 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:56.608 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:56.609 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:56.866 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:56.866 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:56.866 [23/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.866 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:56.866 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:56.866 [26/267] Linking target lib/librte_log.so.24.1 00:02:56.866 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:56.866 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:57.124 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:57.124 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:57.124 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:57.124 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:57.124 [33/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.124 [34/267] Linking target lib/librte_kvargs.so.24.1 00:02:57.124 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:57.382 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:57.382 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:57.382 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:57.382 [39/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:57.382 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:57.382 [41/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:57.382 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:57.382 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:57.382 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:57.382 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:57.382 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:57.640 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:57.640 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:57.640 [49/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:57.640 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:57.640 [51/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:57.898 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:57.898 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:57.898 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:57.898 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:57.898 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:57.898 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:58.156 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:58.156 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:58.156 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:58.156 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:58.156 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:58.156 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:58.156 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:58.415 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:58.415 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:58.415 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:58.415 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:58.676 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:58.676 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:58.676 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:58.676 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:58.676 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:58.676 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:58.676 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:58.676 [76/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:58.676 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:58.676 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:58.934 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:58.934 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:58.934 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:58.934 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:59.192 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:59.192 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:59.192 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:59.192 [86/267] Linking static target lib/librte_ring.a 00:02:59.192 [87/267] Linking static target lib/librte_eal.a 00:02:59.192 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:59.192 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:59.450 [90/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:59.450 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:59.450 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:59.708 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:59.708 [94/267] Linking static target lib/librte_mempool.a 00:02:59.708 [95/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:59.708 [96/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:59.708 [97/267] Linking static target lib/librte_rcu.a 00:02:59.708 [98/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.708 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:59.966 [100/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:59.966 [101/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:59.966 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:59.966 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:59.966 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.966 [105/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:00.224 [106/267] Linking static target lib/librte_meter.a 00:03:00.224 [107/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:00.224 [108/267] Linking static target lib/librte_mbuf.a 00:03:00.225 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:00.225 [110/267] Linking static target lib/librte_net.a 00:03:00.225 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:00.225 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:00.225 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:00.483 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.483 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:00.483 [116/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.483 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.483 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:00.740 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:00.740 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:00.740 [121/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:00.998 [122/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.998 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:00.998 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:00.998 [125/267] Linking static target lib/librte_pci.a 00:03:00.998 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:00.998 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:01.255 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:01.255 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:01.255 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:01.255 [131/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:01.255 [132/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:01.255 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:01.255 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.512 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:01.512 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:01.512 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:01.512 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:01.512 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:01.512 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:01.512 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:01.512 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:01.512 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:01.512 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:01.512 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:01.512 [146/267] Linking static target lib/librte_cmdline.a 00:03:01.771 [147/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:01.771 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:01.771 [149/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:01.771 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:01.771 [151/267] Linking static target lib/librte_timer.a 00:03:01.771 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:02.029 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:02.029 [154/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:02.287 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:02.287 [156/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.287 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:02.545 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:02.545 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:02.545 [160/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:02.545 [161/267] Linking static target lib/librte_compressdev.a 00:03:02.545 [162/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:02.545 [163/267] Linking static target lib/librte_hash.a 00:03:02.545 [164/267] Linking static target lib/librte_ethdev.a 00:03:02.545 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:02.545 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:02.545 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:02.545 [168/267] Linking static target lib/librte_dmadev.a 00:03:02.802 [169/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:02.802 [170/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:02.802 [171/267] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:02.803 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:02.803 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.061 [174/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:03.061 [175/267] Linking static target lib/librte_cryptodev.a 00:03:03.061 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:03.061 [177/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:03.061 [178/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:03.061 [179/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.061 [180/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:03.318 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:03.318 [182/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.318 [183/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.318 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:03.318 [185/267] Linking static target lib/librte_power.a 00:03:03.576 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:03.576 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:03.576 [188/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:03.576 [189/267] Linking static target lib/librte_security.a 00:03:03.834 [190/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:03.834 [191/267] Linking static target lib/librte_reorder.a 00:03:03.834 [192/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:03.834 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:04.091 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.091 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.348 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.348 [197/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:04.348 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:04.348 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:04.606 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:04.606 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:04.606 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:04.606 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:04.606 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:04.863 [205/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.863 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:04.863 [207/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:04.863 [208/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:04.863 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:04.863 
[210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.121 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.121 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.121 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.121 [214/267] Linking static target drivers/librte_bus_vdev.a 00:03:05.121 [215/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.121 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.121 [217/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:05.121 [218/267] Linking static target drivers/librte_bus_pci.a 00:03:05.121 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:05.121 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:05.380 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.380 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:05.380 [223/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.380 [224/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:05.380 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:05.380 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.959 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:06.524 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.524 [229/267] Linking target lib/librte_eal.so.24.1 00:03:06.782 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:06.782 [231/267] Linking target lib/librte_timer.so.24.1 00:03:06.782 [232/267] Linking target lib/librte_pci.so.24.1 00:03:06.782 [233/267] Linking target lib/librte_ring.so.24.1 00:03:06.782 [234/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:06.782 [235/267] Linking target lib/librte_meter.so.24.1 00:03:06.782 [236/267] Linking target lib/librte_dmadev.so.24.1 00:03:06.782 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:06.782 [238/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:06.782 [239/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:06.782 [240/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:06.782 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:06.782 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:06.782 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:06.782 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:07.040 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:07.040 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:07.040 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:07.040 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:07.040 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:07.298 [250/267] 
Linking target lib/librte_compressdev.so.24.1 00:03:07.298 [251/267] Linking target lib/librte_net.so.24.1 00:03:07.298 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:03:07.298 [253/267] Linking target lib/librte_reorder.so.24.1 00:03:07.298 [254/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:07.298 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:07.298 [256/267] Linking target lib/librte_cmdline.so.24.1 00:03:07.298 [257/267] Linking target lib/librte_hash.so.24.1 00:03:07.298 [258/267] Linking target lib/librte_security.so.24.1 00:03:07.298 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:07.556 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:07.815 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:07.815 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:07.815 [263/267] Linking target lib/librte_power.so.24.1 00:03:08.749 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:08.749 [265/267] Linking static target lib/librte_vhost.a 00:03:10.163 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.163 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:10.163 INFO: autodetecting backend as ninja 00:03:10.163 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:25.041 CC lib/ut/ut.o 00:03:25.041 CC lib/ut_mock/mock.o 00:03:25.041 CC lib/log/log_flags.o 00:03:25.041 CC lib/log/log.o 00:03:25.041 CC lib/log/log_deprecated.o 00:03:25.041 LIB libspdk_ut_mock.a 00:03:25.041 LIB libspdk_ut.a 00:03:25.041 SO libspdk_ut_mock.so.6.0 00:03:25.041 LIB libspdk_log.a 00:03:25.041 SO libspdk_ut.so.2.0 00:03:25.041 SO libspdk_log.so.7.1 00:03:25.041 SYMLINK libspdk_ut_mock.so 00:03:25.041 SYMLINK libspdk_ut.so 00:03:25.041 SYMLINK libspdk_log.so 00:03:25.041 CC lib/dma/dma.o 00:03:25.041 CXX lib/trace_parser/trace.o 00:03:25.041 CC lib/util/base64.o 00:03:25.041 CC lib/util/bit_array.o 00:03:25.041 CC lib/util/cpuset.o 00:03:25.041 CC lib/ioat/ioat.o 00:03:25.041 CC lib/util/crc32c.o 00:03:25.041 CC lib/util/crc16.o 00:03:25.041 CC lib/util/crc32.o 00:03:25.041 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.041 CC lib/util/crc32_ieee.o 00:03:25.041 CC lib/util/crc64.o 00:03:25.041 CC lib/util/dif.o 00:03:25.041 CC lib/util/fd.o 00:03:25.041 LIB libspdk_dma.a 00:03:25.041 CC lib/util/fd_group.o 00:03:25.041 SO libspdk_dma.so.5.0 00:03:25.041 CC lib/util/file.o 00:03:25.041 LIB libspdk_ioat.a 00:03:25.041 CC lib/util/hexlify.o 00:03:25.041 CC lib/vfio_user/host/vfio_user.o 00:03:25.041 SYMLINK libspdk_dma.so 00:03:25.041 CC lib/util/iov.o 00:03:25.041 CC lib/util/math.o 00:03:25.041 SO libspdk_ioat.so.7.0 00:03:25.041 CC lib/util/net.o 00:03:25.041 SYMLINK libspdk_ioat.so 00:03:25.041 CC lib/util/pipe.o 00:03:25.041 CC lib/util/strerror_tls.o 00:03:25.041 CC lib/util/string.o 00:03:25.041 CC lib/util/uuid.o 00:03:25.041 LIB libspdk_vfio_user.a 00:03:25.041 CC lib/util/xor.o 00:03:25.041 CC lib/util/zipf.o 00:03:25.041 SO libspdk_vfio_user.so.5.0 00:03:25.041 CC lib/util/md5.o 00:03:25.041 SYMLINK libspdk_vfio_user.so 00:03:25.041 LIB libspdk_util.a 00:03:25.041 SO libspdk_util.so.10.1 00:03:25.041 LIB libspdk_trace_parser.a 00:03:25.041 SO libspdk_trace_parser.so.6.0 00:03:25.041 SYMLINK libspdk_util.so 
00:03:25.041 SYMLINK libspdk_trace_parser.so 00:03:25.041 CC lib/json/json_parse.o 00:03:25.041 CC lib/json/json_util.o 00:03:25.041 CC lib/json/json_write.o 00:03:25.041 CC lib/conf/conf.o 00:03:25.041 CC lib/rdma_utils/rdma_utils.o 00:03:25.041 CC lib/vmd/vmd.o 00:03:25.041 CC lib/vmd/led.o 00:03:25.041 CC lib/env_dpdk/env.o 00:03:25.041 CC lib/rdma_provider/common.o 00:03:25.041 CC lib/idxd/idxd.o 00:03:25.300 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.300 CC lib/idxd/idxd_user.o 00:03:25.300 LIB libspdk_conf.a 00:03:25.300 LIB libspdk_rdma_utils.a 00:03:25.300 CC lib/idxd/idxd_kernel.o 00:03:25.300 SO libspdk_conf.so.6.0 00:03:25.300 CC lib/env_dpdk/memory.o 00:03:25.300 SO libspdk_rdma_utils.so.1.0 00:03:25.300 LIB libspdk_json.a 00:03:25.300 SYMLINK libspdk_conf.so 00:03:25.300 SYMLINK libspdk_rdma_utils.so 00:03:25.300 CC lib/env_dpdk/pci.o 00:03:25.300 CC lib/env_dpdk/init.o 00:03:25.558 SO libspdk_json.so.6.0 00:03:25.558 LIB libspdk_rdma_provider.a 00:03:25.558 SO libspdk_rdma_provider.so.6.0 00:03:25.558 SYMLINK libspdk_json.so 00:03:25.558 CC lib/env_dpdk/threads.o 00:03:25.558 CC lib/env_dpdk/pci_ioat.o 00:03:25.558 CC lib/env_dpdk/pci_virtio.o 00:03:25.558 SYMLINK libspdk_rdma_provider.so 00:03:25.558 CC lib/env_dpdk/pci_vmd.o 00:03:25.558 CC lib/env_dpdk/pci_idxd.o 00:03:25.558 CC lib/env_dpdk/pci_event.o 00:03:25.558 LIB libspdk_idxd.a 00:03:25.558 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.558 SO libspdk_idxd.so.12.1 00:03:25.558 CC lib/env_dpdk/sigbus_handler.o 00:03:25.816 SYMLINK libspdk_idxd.so 00:03:25.816 CC lib/env_dpdk/pci_dpdk.o 00:03:25.816 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.816 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.816 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.816 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.816 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.816 LIB libspdk_vmd.a 00:03:25.816 SO libspdk_vmd.so.6.0 00:03:26.074 LIB libspdk_jsonrpc.a 00:03:26.074 SYMLINK libspdk_vmd.so 00:03:26.074 SO libspdk_jsonrpc.so.6.0 00:03:26.074 SYMLINK libspdk_jsonrpc.so 00:03:26.331 CC lib/rpc/rpc.o 00:03:26.587 LIB libspdk_rpc.a 00:03:26.587 SO libspdk_rpc.so.6.0 00:03:26.587 SYMLINK libspdk_rpc.so 00:03:26.587 LIB libspdk_env_dpdk.a 00:03:26.587 SO libspdk_env_dpdk.so.15.1 00:03:26.847 CC lib/keyring/keyring.o 00:03:26.847 CC lib/keyring/keyring_rpc.o 00:03:26.847 CC lib/trace/trace.o 00:03:26.847 CC lib/trace/trace_rpc.o 00:03:26.847 CC lib/trace/trace_flags.o 00:03:26.847 CC lib/notify/notify.o 00:03:26.847 CC lib/notify/notify_rpc.o 00:03:26.847 SYMLINK libspdk_env_dpdk.so 00:03:26.847 LIB libspdk_notify.a 00:03:26.847 LIB libspdk_keyring.a 00:03:26.847 SO libspdk_notify.so.6.0 00:03:26.847 SO libspdk_keyring.so.2.0 00:03:26.847 LIB libspdk_trace.a 00:03:27.108 SO libspdk_trace.so.11.0 00:03:27.108 SYMLINK libspdk_keyring.so 00:03:27.108 SYMLINK libspdk_notify.so 00:03:27.108 SYMLINK libspdk_trace.so 00:03:27.369 CC lib/sock/sock_rpc.o 00:03:27.369 CC lib/sock/sock.o 00:03:27.369 CC lib/thread/thread.o 00:03:27.369 CC lib/thread/iobuf.o 00:03:27.630 LIB libspdk_sock.a 00:03:27.630 SO libspdk_sock.so.10.0 00:03:27.890 SYMLINK libspdk_sock.so 00:03:27.890 CC lib/nvme/nvme_fabric.o 00:03:27.890 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:27.890 CC lib/nvme/nvme_ctrlr.o 00:03:27.890 CC lib/nvme/nvme_ns_cmd.o 00:03:27.890 CC lib/nvme/nvme_pcie.o 00:03:27.890 CC lib/nvme/nvme_qpair.o 00:03:27.890 CC lib/nvme/nvme.o 00:03:27.890 CC lib/nvme/nvme_pcie_common.o 00:03:27.890 CC lib/nvme/nvme_ns.o 00:03:28.580 CC lib/nvme/nvme_quirks.o 00:03:28.580 CC 
lib/nvme/nvme_transport.o 00:03:28.580 CC lib/nvme/nvme_discovery.o 00:03:28.838 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:28.838 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:28.838 CC lib/nvme/nvme_tcp.o 00:03:28.839 CC lib/nvme/nvme_opal.o 00:03:28.839 LIB libspdk_thread.a 00:03:28.839 CC lib/nvme/nvme_io_msg.o 00:03:29.096 SO libspdk_thread.so.11.0 00:03:29.096 CC lib/nvme/nvme_poll_group.o 00:03:29.096 SYMLINK libspdk_thread.so 00:03:29.096 CC lib/accel/accel.o 00:03:29.096 CC lib/nvme/nvme_zns.o 00:03:29.096 CC lib/nvme/nvme_stubs.o 00:03:29.356 CC lib/nvme/nvme_auth.o 00:03:29.356 CC lib/nvme/nvme_cuse.o 00:03:29.356 CC lib/nvme/nvme_rdma.o 00:03:29.614 CC lib/blob/blobstore.o 00:03:29.614 CC lib/blob/request.o 00:03:29.614 CC lib/blob/zeroes.o 00:03:29.614 CC lib/blob/blob_bs_dev.o 00:03:29.872 CC lib/accel/accel_rpc.o 00:03:29.872 CC lib/accel/accel_sw.o 00:03:30.131 CC lib/init/json_config.o 00:03:30.131 CC lib/init/subsystem.o 00:03:30.131 CC lib/virtio/virtio.o 00:03:30.131 CC lib/fsdev/fsdev.o 00:03:30.131 CC lib/init/subsystem_rpc.o 00:03:30.131 CC lib/init/rpc.o 00:03:30.131 CC lib/virtio/virtio_vhost_user.o 00:03:30.388 CC lib/virtio/virtio_vfio_user.o 00:03:30.388 CC lib/virtio/virtio_pci.o 00:03:30.388 CC lib/fsdev/fsdev_io.o 00:03:30.388 LIB libspdk_accel.a 00:03:30.388 CC lib/fsdev/fsdev_rpc.o 00:03:30.388 SO libspdk_accel.so.16.0 00:03:30.388 LIB libspdk_init.a 00:03:30.388 SO libspdk_init.so.6.0 00:03:30.388 SYMLINK libspdk_accel.so 00:03:30.646 LIB libspdk_nvme.a 00:03:30.646 SYMLINK libspdk_init.so 00:03:30.646 LIB libspdk_virtio.a 00:03:30.646 SO libspdk_virtio.so.7.0 00:03:30.646 CC lib/bdev/bdev_rpc.o 00:03:30.646 CC lib/bdev/bdev_zone.o 00:03:30.646 CC lib/bdev/bdev.o 00:03:30.646 CC lib/bdev/part.o 00:03:30.646 SO libspdk_nvme.so.15.0 00:03:30.646 SYMLINK libspdk_virtio.so 00:03:30.646 CC lib/event/app.o 00:03:30.646 CC lib/event/reactor.o 00:03:30.646 CC lib/event/log_rpc.o 00:03:30.646 LIB libspdk_fsdev.a 00:03:30.905 SO libspdk_fsdev.so.2.0 00:03:30.905 CC lib/event/app_rpc.o 00:03:30.905 SYMLINK libspdk_fsdev.so 00:03:30.905 CC lib/event/scheduler_static.o 00:03:30.905 CC lib/bdev/scsi_nvme.o 00:03:30.905 SYMLINK libspdk_nvme.so 00:03:31.166 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:31.166 LIB libspdk_event.a 00:03:31.166 SO libspdk_event.so.14.0 00:03:31.426 SYMLINK libspdk_event.so 00:03:31.716 LIB libspdk_fuse_dispatcher.a 00:03:31.716 SO libspdk_fuse_dispatcher.so.1.0 00:03:31.716 SYMLINK libspdk_fuse_dispatcher.so 00:03:33.102 LIB libspdk_blob.a 00:03:33.102 SO libspdk_blob.so.11.0 00:03:33.102 SYMLINK libspdk_blob.so 00:03:33.363 CC lib/lvol/lvol.o 00:03:33.363 CC lib/blobfs/blobfs.o 00:03:33.363 CC lib/blobfs/tree.o 00:03:33.624 LIB libspdk_bdev.a 00:03:33.624 SO libspdk_bdev.so.17.0 00:03:33.624 SYMLINK libspdk_bdev.so 00:03:33.885 CC lib/ftl/ftl_core.o 00:03:33.885 CC lib/nbd/nbd_rpc.o 00:03:33.885 CC lib/ftl/ftl_layout.o 00:03:33.885 CC lib/ftl/ftl_init.o 00:03:33.885 CC lib/nbd/nbd.o 00:03:33.885 CC lib/ublk/ublk.o 00:03:33.885 CC lib/nvmf/ctrlr.o 00:03:33.885 CC lib/scsi/dev.o 00:03:33.885 CC lib/scsi/lun.o 00:03:33.885 CC lib/scsi/port.o 00:03:34.145 CC lib/ublk/ublk_rpc.o 00:03:34.145 CC lib/ftl/ftl_debug.o 00:03:34.145 CC lib/scsi/scsi.o 00:03:34.145 CC lib/ftl/ftl_io.o 00:03:34.145 LIB libspdk_blobfs.a 00:03:34.145 LIB libspdk_lvol.a 00:03:34.145 CC lib/ftl/ftl_sb.o 00:03:34.145 SO libspdk_blobfs.so.10.0 00:03:34.145 SO libspdk_lvol.so.10.0 00:03:34.145 LIB libspdk_nbd.a 00:03:34.145 CC lib/nvmf/ctrlr_discovery.o 00:03:34.145 SO 
libspdk_nbd.so.7.0 00:03:34.145 SYMLINK libspdk_lvol.so 00:03:34.405 SYMLINK libspdk_blobfs.so 00:03:34.405 CC lib/nvmf/ctrlr_bdev.o 00:03:34.405 CC lib/nvmf/subsystem.o 00:03:34.405 CC lib/scsi/scsi_bdev.o 00:03:34.405 CC lib/ftl/ftl_l2p.o 00:03:34.405 SYMLINK libspdk_nbd.so 00:03:34.405 CC lib/ftl/ftl_l2p_flat.o 00:03:34.405 CC lib/ftl/ftl_nv_cache.o 00:03:34.405 CC lib/nvmf/nvmf.o 00:03:34.405 LIB libspdk_ublk.a 00:03:34.405 CC lib/nvmf/nvmf_rpc.o 00:03:34.405 SO libspdk_ublk.so.3.0 00:03:34.405 CC lib/nvmf/transport.o 00:03:34.665 SYMLINK libspdk_ublk.so 00:03:34.665 CC lib/nvmf/tcp.o 00:03:34.665 CC lib/scsi/scsi_pr.o 00:03:34.665 CC lib/nvmf/stubs.o 00:03:34.926 CC lib/scsi/scsi_rpc.o 00:03:34.926 CC lib/scsi/task.o 00:03:35.185 CC lib/nvmf/mdns_server.o 00:03:35.185 CC lib/nvmf/rdma.o 00:03:35.185 CC lib/nvmf/auth.o 00:03:35.185 LIB libspdk_scsi.a 00:03:35.185 CC lib/ftl/ftl_band.o 00:03:35.185 SO libspdk_scsi.so.9.0 00:03:35.185 SYMLINK libspdk_scsi.so 00:03:35.185 CC lib/ftl/ftl_band_ops.o 00:03:35.185 CC lib/ftl/ftl_writer.o 00:03:35.185 CC lib/ftl/ftl_rq.o 00:03:35.444 CC lib/ftl/ftl_reloc.o 00:03:35.444 CC lib/iscsi/conn.o 00:03:35.444 CC lib/ftl/ftl_l2p_cache.o 00:03:35.444 CC lib/iscsi/init_grp.o 00:03:35.444 CC lib/ftl/ftl_p2l.o 00:03:35.705 CC lib/vhost/vhost.o 00:03:35.705 CC lib/ftl/ftl_p2l_log.o 00:03:35.705 CC lib/iscsi/iscsi.o 00:03:35.705 CC lib/iscsi/param.o 00:03:35.967 CC lib/iscsi/portal_grp.o 00:03:35.967 CC lib/iscsi/tgt_node.o 00:03:35.967 CC lib/iscsi/iscsi_subsystem.o 00:03:35.967 CC lib/iscsi/iscsi_rpc.o 00:03:36.232 CC lib/ftl/mngt/ftl_mngt.o 00:03:36.232 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:36.232 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.232 CC lib/iscsi/task.o 00:03:36.232 CC lib/vhost/vhost_rpc.o 00:03:36.232 CC lib/vhost/vhost_scsi.o 00:03:36.232 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.493 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.493 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.493 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.493 CC lib/vhost/vhost_blk.o 00:03:36.493 CC lib/vhost/rte_vhost_user.o 00:03:36.493 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.493 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.752 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.752 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.752 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.752 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.752 CC lib/ftl/utils/ftl_conf.o 00:03:36.752 CC lib/ftl/utils/ftl_md.o 00:03:36.752 CC lib/ftl/utils/ftl_mempool.o 00:03:37.013 CC lib/ftl/utils/ftl_bitmap.o 00:03:37.013 CC lib/ftl/utils/ftl_property.o 00:03:37.013 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:37.013 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:37.013 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:37.013 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:37.274 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:37.274 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:37.274 LIB libspdk_iscsi.a 00:03:37.274 LIB libspdk_nvmf.a 00:03:37.274 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:37.274 SO libspdk_iscsi.so.8.0 00:03:37.274 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:37.274 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:37.274 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:37.274 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:37.274 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:37.274 SO libspdk_nvmf.so.20.0 00:03:37.274 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:37.535 SYMLINK libspdk_iscsi.so 00:03:37.535 CC lib/ftl/base/ftl_base_dev.o 00:03:37.535 LIB libspdk_vhost.a 00:03:37.535 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.535 CC lib/ftl/ftl_trace.o 00:03:37.535 SO 
libspdk_vhost.so.8.0 00:03:37.535 SYMLINK libspdk_vhost.so 00:03:37.535 SYMLINK libspdk_nvmf.so 00:03:37.795 LIB libspdk_ftl.a 00:03:37.795 SO libspdk_ftl.so.9.0 00:03:38.056 SYMLINK libspdk_ftl.so 00:03:38.315 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.315 CC module/sock/posix/posix.o 00:03:38.315 CC module/accel/error/accel_error.o 00:03:38.574 CC module/keyring/file/keyring.o 00:03:38.575 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.575 CC module/accel/dsa/accel_dsa.o 00:03:38.575 CC module/blob/bdev/blob_bdev.o 00:03:38.575 CC module/accel/ioat/accel_ioat.o 00:03:38.575 CC module/fsdev/aio/fsdev_aio.o 00:03:38.575 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.575 LIB libspdk_env_dpdk_rpc.a 00:03:38.575 SO libspdk_env_dpdk_rpc.so.6.0 00:03:38.575 SYMLINK libspdk_env_dpdk_rpc.so 00:03:38.575 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:38.575 CC module/keyring/file/keyring_rpc.o 00:03:38.575 LIB libspdk_scheduler_dpdk_governor.a 00:03:38.575 LIB libspdk_scheduler_dynamic.a 00:03:38.575 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:38.575 CC module/accel/error/accel_error_rpc.o 00:03:38.575 SO libspdk_scheduler_dynamic.so.4.0 00:03:38.575 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.575 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:38.575 SYMLINK libspdk_scheduler_dynamic.so 00:03:38.575 LIB libspdk_blob_bdev.a 00:03:38.835 CC module/fsdev/aio/linux_aio_mgr.o 00:03:38.835 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.835 LIB libspdk_keyring_file.a 00:03:38.835 SO libspdk_blob_bdev.so.11.0 00:03:38.835 LIB libspdk_accel_error.a 00:03:38.835 LIB libspdk_accel_ioat.a 00:03:38.835 SO libspdk_keyring_file.so.2.0 00:03:38.835 SO libspdk_accel_error.so.2.0 00:03:38.835 SO libspdk_accel_ioat.so.6.0 00:03:38.835 CC module/accel/iaa/accel_iaa.o 00:03:38.835 SYMLINK libspdk_blob_bdev.so 00:03:38.835 SYMLINK libspdk_keyring_file.so 00:03:38.835 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.835 SYMLINK libspdk_accel_error.so 00:03:38.835 SYMLINK libspdk_accel_ioat.so 00:03:38.835 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.835 LIB libspdk_accel_dsa.a 00:03:38.835 SO libspdk_accel_dsa.so.5.0 00:03:38.835 LIB libspdk_scheduler_gscheduler.a 00:03:38.835 SYMLINK libspdk_accel_dsa.so 00:03:38.835 CC module/keyring/linux/keyring.o 00:03:38.835 CC module/keyring/linux/keyring_rpc.o 00:03:38.835 SO libspdk_scheduler_gscheduler.so.4.0 00:03:38.835 LIB libspdk_accel_iaa.a 00:03:39.096 SO libspdk_accel_iaa.so.3.0 00:03:39.096 CC module/bdev/delay/vbdev_delay.o 00:03:39.096 SYMLINK libspdk_accel_iaa.so 00:03:39.096 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.096 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.096 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.096 LIB libspdk_keyring_linux.a 00:03:39.096 CC module/bdev/error/vbdev_error.o 00:03:39.096 SO libspdk_keyring_linux.so.1.0 00:03:39.096 CC module/bdev/gpt/gpt.o 00:03:39.096 SYMLINK libspdk_keyring_linux.so 00:03:39.096 LIB libspdk_fsdev_aio.a 00:03:39.096 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.096 SO libspdk_fsdev_aio.so.1.0 00:03:39.096 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.096 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.096 CC module/bdev/malloc/bdev_malloc.o 00:03:39.096 LIB libspdk_sock_posix.a 00:03:39.096 LIB libspdk_blobfs_bdev.a 00:03:39.096 SO libspdk_sock_posix.so.6.0 00:03:39.096 SO libspdk_blobfs_bdev.so.6.0 00:03:39.385 SYMLINK libspdk_fsdev_aio.so 00:03:39.385 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.385 SYMLINK libspdk_blobfs_bdev.so 00:03:39.385 CC module/bdev/delay/vbdev_delay_rpc.o 
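The CC/LIB/SO/SYMLINK lines in this stretch are the quiet-build labels SPDK's Makefiles print when compiling objects, archiving the static library, linking the versioned shared object, and symlinking its unversioned name. A hedged way to confirm the versioning on one of the libraries built above, assuming the default build/lib output directory of an SPDK tree:

# The SONAME should match the SO step in the log (e.g. libspdk_log.so.7.1).
readelf -d build/lib/libspdk_log.so | grep SONAME
# The unversioned name should be a symlink onto the versioned object.
ls -l build/lib/libspdk_log.so*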
00:03:39.385 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.385 SYMLINK libspdk_sock_posix.so 00:03:39.385 CC module/bdev/null/bdev_null.o 00:03:39.385 LIB libspdk_bdev_delay.a 00:03:39.385 LIB libspdk_bdev_gpt.a 00:03:39.385 CC module/bdev/nvme/bdev_nvme.o 00:03:39.385 LIB libspdk_bdev_error.a 00:03:39.385 SO libspdk_bdev_delay.so.6.0 00:03:39.385 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.385 SO libspdk_bdev_gpt.so.6.0 00:03:39.385 SO libspdk_bdev_error.so.6.0 00:03:39.385 SYMLINK libspdk_bdev_delay.so 00:03:39.385 SYMLINK libspdk_bdev_gpt.so 00:03:39.386 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.386 SYMLINK libspdk_bdev_error.so 00:03:39.646 CC module/bdev/nvme/nvme_rpc.o 00:03:39.646 CC module/bdev/raid/bdev_raid.o 00:03:39.646 LIB libspdk_bdev_malloc.a 00:03:39.646 CC module/bdev/null/bdev_null_rpc.o 00:03:39.646 SO libspdk_bdev_malloc.so.6.0 00:03:39.646 CC module/bdev/split/vbdev_split.o 00:03:39.646 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.646 SYMLINK libspdk_bdev_malloc.so 00:03:39.646 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.646 LIB libspdk_bdev_lvol.a 00:03:39.646 SO libspdk_bdev_lvol.so.6.0 00:03:39.646 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.646 LIB libspdk_bdev_null.a 00:03:39.646 SO libspdk_bdev_null.so.6.0 00:03:39.646 SYMLINK libspdk_bdev_lvol.so 00:03:39.906 SYMLINK libspdk_bdev_null.so 00:03:39.906 CC module/bdev/xnvme/bdev_xnvme.o 00:03:39.906 LIB libspdk_bdev_split.a 00:03:39.906 LIB libspdk_bdev_passthru.a 00:03:39.906 SO libspdk_bdev_split.so.6.0 00:03:39.906 SO libspdk_bdev_passthru.so.6.0 00:03:39.906 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.906 CC module/bdev/aio/bdev_aio.o 00:03:39.906 CC module/bdev/ftl/bdev_ftl.o 00:03:39.906 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.906 SYMLINK libspdk_bdev_split.so 00:03:39.906 SYMLINK libspdk_bdev_passthru.so 00:03:39.906 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:39.906 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.906 LIB libspdk_bdev_zone_block.a 00:03:39.906 SO libspdk_bdev_zone_block.so.6.0 00:03:40.166 SYMLINK libspdk_bdev_zone_block.so 00:03:40.166 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:40.166 LIB libspdk_bdev_xnvme.a 00:03:40.166 SO libspdk_bdev_xnvme.so.3.0 00:03:40.166 CC module/bdev/raid/bdev_raid_sb.o 00:03:40.166 CC module/bdev/raid/raid0.o 00:03:40.166 SYMLINK libspdk_bdev_xnvme.so 00:03:40.166 CC module/bdev/raid/raid1.o 00:03:40.166 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:40.166 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:40.166 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:40.166 CC module/bdev/aio/bdev_aio_rpc.o 00:03:40.166 LIB libspdk_bdev_iscsi.a 00:03:40.166 SO libspdk_bdev_iscsi.so.6.0 00:03:40.426 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:40.426 SYMLINK libspdk_bdev_iscsi.so 00:03:40.426 CC module/bdev/nvme/bdev_mdns_client.o 00:03:40.426 CC module/bdev/raid/concat.o 00:03:40.426 LIB libspdk_bdev_aio.a 00:03:40.426 LIB libspdk_bdev_ftl.a 00:03:40.426 CC module/bdev/nvme/vbdev_opal.o 00:03:40.426 SO libspdk_bdev_ftl.so.6.0 00:03:40.426 SO libspdk_bdev_aio.so.6.0 00:03:40.426 SYMLINK libspdk_bdev_ftl.so 00:03:40.426 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:40.426 SYMLINK libspdk_bdev_aio.so 00:03:40.426 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:40.687 LIB libspdk_bdev_raid.a 00:03:40.687 SO libspdk_bdev_raid.so.6.0 00:03:40.687 LIB libspdk_bdev_virtio.a 00:03:40.687 SO libspdk_bdev_virtio.so.6.0 00:03:40.687 SYMLINK libspdk_bdev_raid.so 00:03:40.687 SYMLINK libspdk_bdev_virtio.so 00:03:42.070 LIB libspdk_bdev_nvme.a 
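The module/bdev/* objects compiled above (malloc, null, error, gpt, lvol, raid, zone_block, passthru, xnvme, aio, ftl, iscsi, virtio) are SPDK's pluggable block-device backends. As a hedged usage sketch once a target built from these artifacts is running, using stock SPDK RPCs; the bdev name and sizes here are arbitrary illustration values:

# Create a 64 MiB RAM-backed bdev with 512-byte blocks, then list bdevs to confirm.
scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
scripts/rpc.py bdev_get_bdevs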
00:03:42.070 SO libspdk_bdev_nvme.so.7.1 00:03:42.330 SYMLINK libspdk_bdev_nvme.so 00:03:42.591 CC module/event/subsystems/sock/sock.o 00:03:42.591 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.591 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.591 CC module/event/subsystems/keyring/keyring.o 00:03:42.591 CC module/event/subsystems/vmd/vmd.o 00:03:42.591 CC module/event/subsystems/fsdev/fsdev.o 00:03:42.591 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.591 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.591 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.591 LIB libspdk_event_vhost_blk.a 00:03:42.591 LIB libspdk_event_sock.a 00:03:42.591 LIB libspdk_event_fsdev.a 00:03:42.591 LIB libspdk_event_iobuf.a 00:03:42.591 LIB libspdk_event_scheduler.a 00:03:42.591 LIB libspdk_event_keyring.a 00:03:42.591 SO libspdk_event_vhost_blk.so.3.0 00:03:42.852 SO libspdk_event_fsdev.so.1.0 00:03:42.852 SO libspdk_event_sock.so.5.0 00:03:42.852 LIB libspdk_event_vmd.a 00:03:42.852 SO libspdk_event_scheduler.so.4.0 00:03:42.852 SO libspdk_event_keyring.so.1.0 00:03:42.852 SO libspdk_event_iobuf.so.3.0 00:03:42.852 SO libspdk_event_vmd.so.6.0 00:03:42.852 SYMLINK libspdk_event_vhost_blk.so 00:03:42.852 SYMLINK libspdk_event_fsdev.so 00:03:42.852 SYMLINK libspdk_event_sock.so 00:03:42.852 SYMLINK libspdk_event_keyring.so 00:03:42.852 SYMLINK libspdk_event_scheduler.so 00:03:42.852 SYMLINK libspdk_event_iobuf.so 00:03:42.852 SYMLINK libspdk_event_vmd.so 00:03:43.113 CC module/event/subsystems/accel/accel.o 00:03:43.113 LIB libspdk_event_accel.a 00:03:43.113 SO libspdk_event_accel.so.6.0 00:03:43.113 SYMLINK libspdk_event_accel.so 00:03:43.374 CC module/event/subsystems/bdev/bdev.o 00:03:43.636 LIB libspdk_event_bdev.a 00:03:43.636 SO libspdk_event_bdev.so.6.0 00:03:43.636 SYMLINK libspdk_event_bdev.so 00:03:43.897 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.897 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.897 CC module/event/subsystems/ublk/ublk.o 00:03:43.897 CC module/event/subsystems/nbd/nbd.o 00:03:43.897 CC module/event/subsystems/scsi/scsi.o 00:03:43.897 LIB libspdk_event_nbd.a 00:03:43.897 LIB libspdk_event_ublk.a 00:03:43.897 LIB libspdk_event_scsi.a 00:03:44.159 SO libspdk_event_nbd.so.6.0 00:03:44.159 SO libspdk_event_ublk.so.3.0 00:03:44.159 SO libspdk_event_scsi.so.6.0 00:03:44.159 SYMLINK libspdk_event_ublk.so 00:03:44.159 SYMLINK libspdk_event_nbd.so 00:03:44.159 LIB libspdk_event_nvmf.a 00:03:44.159 SYMLINK libspdk_event_scsi.so 00:03:44.159 SO libspdk_event_nvmf.so.6.0 00:03:44.159 SYMLINK libspdk_event_nvmf.so 00:03:44.420 CC module/event/subsystems/iscsi/iscsi.o 00:03:44.420 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:44.420 LIB libspdk_event_vhost_scsi.a 00:03:44.420 LIB libspdk_event_iscsi.a 00:03:44.420 SO libspdk_event_vhost_scsi.so.3.0 00:03:44.420 SO libspdk_event_iscsi.so.6.0 00:03:44.681 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.681 SYMLINK libspdk_event_iscsi.so 00:03:44.681 SO libspdk.so.6.0 00:03:44.681 SYMLINK libspdk.so 00:03:44.942 CC test/rpc_client/rpc_client_test.o 00:03:44.942 CC app/trace_record/trace_record.o 00:03:44.942 CXX app/trace/trace.o 00:03:44.942 TEST_HEADER include/spdk/accel.h 00:03:44.942 TEST_HEADER include/spdk/accel_module.h 00:03:44.942 TEST_HEADER include/spdk/assert.h 00:03:44.942 TEST_HEADER include/spdk/barrier.h 00:03:44.942 TEST_HEADER include/spdk/base64.h 00:03:44.942 TEST_HEADER include/spdk/bdev.h 00:03:44.942 TEST_HEADER include/spdk/bdev_module.h 00:03:44.942 TEST_HEADER 
include/spdk/bdev_zone.h 00:03:44.942 TEST_HEADER include/spdk/bit_array.h 00:03:44.942 TEST_HEADER include/spdk/bit_pool.h 00:03:44.942 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.942 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.942 TEST_HEADER include/spdk/blobfs.h 00:03:44.942 TEST_HEADER include/spdk/blob.h 00:03:44.942 TEST_HEADER include/spdk/conf.h 00:03:44.942 TEST_HEADER include/spdk/config.h 00:03:44.942 CC app/nvmf_tgt/nvmf_main.o 00:03:44.942 TEST_HEADER include/spdk/cpuset.h 00:03:44.942 TEST_HEADER include/spdk/crc16.h 00:03:44.942 TEST_HEADER include/spdk/crc32.h 00:03:44.942 TEST_HEADER include/spdk/crc64.h 00:03:44.942 TEST_HEADER include/spdk/dif.h 00:03:44.942 TEST_HEADER include/spdk/dma.h 00:03:44.942 TEST_HEADER include/spdk/endian.h 00:03:44.942 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.942 TEST_HEADER include/spdk/env.h 00:03:44.942 TEST_HEADER include/spdk/event.h 00:03:44.942 TEST_HEADER include/spdk/fd_group.h 00:03:44.942 TEST_HEADER include/spdk/fd.h 00:03:44.942 TEST_HEADER include/spdk/file.h 00:03:44.942 TEST_HEADER include/spdk/fsdev.h 00:03:44.942 TEST_HEADER include/spdk/fsdev_module.h 00:03:44.942 TEST_HEADER include/spdk/ftl.h 00:03:44.942 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:44.942 CC test/thread/poller_perf/poller_perf.o 00:03:44.942 CC examples/util/zipf/zipf.o 00:03:44.942 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.942 TEST_HEADER include/spdk/hexlify.h 00:03:44.942 TEST_HEADER include/spdk/histogram_data.h 00:03:44.942 TEST_HEADER include/spdk/idxd.h 00:03:44.942 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.942 TEST_HEADER include/spdk/init.h 00:03:44.942 TEST_HEADER include/spdk/ioat.h 00:03:44.942 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.942 TEST_HEADER include/spdk/iscsi_spec.h 00:03:44.942 TEST_HEADER include/spdk/json.h 00:03:44.942 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.942 CC test/dma/test_dma/test_dma.o 00:03:44.942 CC test/app/bdev_svc/bdev_svc.o 00:03:44.942 TEST_HEADER include/spdk/keyring.h 00:03:44.942 TEST_HEADER include/spdk/keyring_module.h 00:03:44.942 TEST_HEADER include/spdk/likely.h 00:03:44.942 TEST_HEADER include/spdk/log.h 00:03:44.942 TEST_HEADER include/spdk/lvol.h 00:03:44.942 TEST_HEADER include/spdk/md5.h 00:03:44.942 TEST_HEADER include/spdk/memory.h 00:03:44.942 TEST_HEADER include/spdk/mmio.h 00:03:44.942 TEST_HEADER include/spdk/nbd.h 00:03:44.942 TEST_HEADER include/spdk/net.h 00:03:44.942 TEST_HEADER include/spdk/notify.h 00:03:44.942 TEST_HEADER include/spdk/nvme.h 00:03:44.942 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.942 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.942 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.942 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.942 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.942 CC test/env/mem_callbacks/mem_callbacks.o 00:03:44.942 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.942 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.942 TEST_HEADER include/spdk/nvmf.h 00:03:44.942 TEST_HEADER include/spdk/nvmf_spec.h 00:03:45.203 TEST_HEADER include/spdk/nvmf_transport.h 00:03:45.203 TEST_HEADER include/spdk/opal.h 00:03:45.203 TEST_HEADER include/spdk/opal_spec.h 00:03:45.203 TEST_HEADER include/spdk/pci_ids.h 00:03:45.203 TEST_HEADER include/spdk/pipe.h 00:03:45.203 TEST_HEADER include/spdk/queue.h 00:03:45.203 TEST_HEADER include/spdk/reduce.h 00:03:45.203 TEST_HEADER include/spdk/rpc.h 00:03:45.203 TEST_HEADER include/spdk/scheduler.h 00:03:45.203 TEST_HEADER include/spdk/scsi.h 00:03:45.203 TEST_HEADER include/spdk/scsi_spec.h 
00:03:45.203 TEST_HEADER include/spdk/sock.h 00:03:45.203 LINK rpc_client_test 00:03:45.203 TEST_HEADER include/spdk/stdinc.h 00:03:45.203 TEST_HEADER include/spdk/string.h 00:03:45.203 TEST_HEADER include/spdk/thread.h 00:03:45.203 TEST_HEADER include/spdk/trace.h 00:03:45.203 TEST_HEADER include/spdk/trace_parser.h 00:03:45.203 TEST_HEADER include/spdk/tree.h 00:03:45.203 TEST_HEADER include/spdk/ublk.h 00:03:45.203 TEST_HEADER include/spdk/util.h 00:03:45.203 TEST_HEADER include/spdk/uuid.h 00:03:45.203 TEST_HEADER include/spdk/version.h 00:03:45.203 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:45.203 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:45.203 TEST_HEADER include/spdk/vhost.h 00:03:45.203 LINK nvmf_tgt 00:03:45.203 TEST_HEADER include/spdk/vmd.h 00:03:45.203 TEST_HEADER include/spdk/xor.h 00:03:45.203 TEST_HEADER include/spdk/zipf.h 00:03:45.203 CXX test/cpp_headers/accel.o 00:03:45.203 LINK poller_perf 00:03:45.203 LINK zipf 00:03:45.203 LINK spdk_trace_record 00:03:45.203 LINK bdev_svc 00:03:45.203 CXX test/cpp_headers/accel_module.o 00:03:45.203 LINK spdk_trace 00:03:45.203 CXX test/cpp_headers/assert.o 00:03:45.203 CXX test/cpp_headers/barrier.o 00:03:45.464 CC app/iscsi_tgt/iscsi_tgt.o 00:03:45.464 CC app/spdk_tgt/spdk_tgt.o 00:03:45.464 CXX test/cpp_headers/base64.o 00:03:45.464 CC examples/ioat/verify/verify.o 00:03:45.464 CC examples/ioat/perf/perf.o 00:03:45.464 LINK test_dma 00:03:45.464 CC test/app/histogram_perf/histogram_perf.o 00:03:45.464 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.464 LINK mem_callbacks 00:03:45.726 LINK iscsi_tgt 00:03:45.726 CXX test/cpp_headers/bdev.o 00:03:45.726 CC test/event/event_perf/event_perf.o 00:03:45.726 LINK spdk_tgt 00:03:45.727 LINK histogram_perf 00:03:45.727 LINK verify 00:03:45.727 LINK ioat_perf 00:03:45.727 CXX test/cpp_headers/bdev_module.o 00:03:45.727 LINK event_perf 00:03:45.727 CXX test/cpp_headers/bdev_zone.o 00:03:45.727 CC test/env/vtophys/vtophys.o 00:03:45.988 CC app/spdk_lspci/spdk_lspci.o 00:03:45.988 CXX test/cpp_headers/bit_array.o 00:03:45.988 LINK vtophys 00:03:45.988 CC test/accel/dif/dif.o 00:03:45.988 CC test/event/reactor/reactor.o 00:03:45.988 LINK nvme_fuzz 00:03:45.988 CC test/blobfs/mkfs/mkfs.o 00:03:45.988 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.988 LINK spdk_lspci 00:03:45.988 CC test/nvme/aer/aer.o 00:03:45.988 CC test/lvol/esnap/esnap.o 00:03:45.988 CXX test/cpp_headers/bit_pool.o 00:03:45.988 LINK reactor 00:03:46.250 LINK lsvmd 00:03:46.250 LINK mkfs 00:03:46.250 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.250 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:46.250 CXX test/cpp_headers/blob_bdev.o 00:03:46.250 CC app/spdk_nvme_perf/perf.o 00:03:46.250 CC test/event/reactor_perf/reactor_perf.o 00:03:46.250 LINK aer 00:03:46.250 LINK env_dpdk_post_init 00:03:46.511 CC examples/vmd/led/led.o 00:03:46.511 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.511 LINK reactor_perf 00:03:46.511 CC app/spdk_nvme_identify/identify.o 00:03:46.511 LINK led 00:03:46.511 CC test/nvme/reset/reset.o 00:03:46.511 CC test/env/memory/memory_ut.o 00:03:46.511 CXX test/cpp_headers/blobfs.o 00:03:46.773 CC test/event/app_repeat/app_repeat.o 00:03:46.773 LINK dif 00:03:46.773 CXX test/cpp_headers/blob.o 00:03:46.773 LINK app_repeat 00:03:46.773 CC examples/idxd/perf/perf.o 00:03:46.773 LINK reset 00:03:46.773 CXX test/cpp_headers/conf.o 00:03:47.034 CC test/nvme/sgl/sgl.o 00:03:47.034 CC test/event/scheduler/scheduler.o 00:03:47.034 CXX test/cpp_headers/config.o 00:03:47.034 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.034 CXX test/cpp_headers/cpuset.o 00:03:47.034 LINK spdk_nvme_perf 00:03:47.034 LINK idxd_perf 00:03:47.294 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.294 LINK sgl 00:03:47.294 CXX test/cpp_headers/crc16.o 00:03:47.294 LINK scheduler 00:03:47.294 LINK spdk_nvme_identify 00:03:47.294 CXX test/cpp_headers/crc32.o 00:03:47.294 CC test/app/jsoncat/jsoncat.o 00:03:47.294 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:47.294 CC test/nvme/e2edp/nvme_dp.o 00:03:47.556 CXX test/cpp_headers/crc64.o 00:03:47.556 LINK jsoncat 00:03:47.556 CC app/spdk_nvme_discover/discovery_aer.o 00:03:47.556 CC app/spdk_top/spdk_top.o 00:03:47.556 LINK interrupt_tgt 00:03:47.556 LINK vhost_fuzz 00:03:47.556 CXX test/cpp_headers/dif.o 00:03:47.816 LINK nvme_dp 00:03:47.816 LINK spdk_nvme_discover 00:03:47.816 CC test/app/stub/stub.o 00:03:47.816 LINK memory_ut 00:03:47.816 CXX test/cpp_headers/dma.o 00:03:47.816 CXX test/cpp_headers/endian.o 00:03:47.816 LINK stub 00:03:47.816 CC test/nvme/overhead/overhead.o 00:03:47.816 CC test/bdev/bdevio/bdevio.o 00:03:48.078 CC examples/thread/thread/thread_ex.o 00:03:48.078 CC test/env/pci/pci_ut.o 00:03:48.078 CXX test/cpp_headers/env_dpdk.o 00:03:48.078 LINK iscsi_fuzz 00:03:48.078 CC app/vhost/vhost.o 00:03:48.078 CC app/spdk_dd/spdk_dd.o 00:03:48.078 CXX test/cpp_headers/env.o 00:03:48.078 LINK thread 00:03:48.338 LINK overhead 00:03:48.338 CXX test/cpp_headers/event.o 00:03:48.338 LINK vhost 00:03:48.338 LINK bdevio 00:03:48.338 CXX test/cpp_headers/fd_group.o 00:03:48.338 CC test/nvme/err_injection/err_injection.o 00:03:48.338 LINK pci_ut 00:03:48.338 CXX test/cpp_headers/fd.o 00:03:48.598 LINK spdk_top 00:03:48.598 LINK spdk_dd 00:03:48.598 CC examples/sock/hello_world/hello_sock.o 00:03:48.598 CC app/fio/nvme/fio_plugin.o 00:03:48.598 CXX test/cpp_headers/file.o 00:03:48.598 CC app/fio/bdev/fio_plugin.o 00:03:48.598 CXX test/cpp_headers/fsdev.o 00:03:48.598 LINK err_injection 00:03:48.598 CXX test/cpp_headers/fsdev_module.o 00:03:48.598 CC examples/accel/perf/accel_perf.o 00:03:48.859 LINK hello_sock 00:03:48.859 CC examples/blob/hello_world/hello_blob.o 00:03:48.859 CC examples/blob/cli/blobcli.o 00:03:48.859 CC test/nvme/startup/startup.o 00:03:48.859 CXX test/cpp_headers/ftl.o 00:03:48.859 CXX test/cpp_headers/fuse_dispatcher.o 00:03:48.859 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:49.120 LINK startup 00:03:49.120 CXX test/cpp_headers/gpt_spec.o 00:03:49.120 LINK hello_blob 00:03:49.120 LINK spdk_bdev 00:03:49.120 LINK spdk_nvme 00:03:49.120 CXX test/cpp_headers/hexlify.o 00:03:49.120 CC examples/nvme/hello_world/hello_world.o 00:03:49.120 CXX test/cpp_headers/histogram_data.o 00:03:49.120 CXX test/cpp_headers/idxd.o 00:03:49.120 LINK accel_perf 00:03:49.120 CC test/nvme/reserve/reserve.o 00:03:49.120 CXX test/cpp_headers/idxd_spec.o 00:03:49.120 LINK hello_fsdev 00:03:49.381 LINK blobcli 00:03:49.381 CXX test/cpp_headers/init.o 00:03:49.382 CXX test/cpp_headers/ioat.o 00:03:49.382 CXX test/cpp_headers/ioat_spec.o 00:03:49.382 CXX test/cpp_headers/iscsi_spec.o 00:03:49.382 CXX test/cpp_headers/json.o 00:03:49.382 LINK hello_world 00:03:49.382 LINK reserve 00:03:49.382 CXX test/cpp_headers/jsonrpc.o 00:03:49.382 CXX test/cpp_headers/keyring.o 00:03:49.382 CXX test/cpp_headers/keyring_module.o 00:03:49.382 CXX test/cpp_headers/likely.o 00:03:49.382 CC examples/bdev/hello_world/hello_bdev.o 00:03:49.382 CXX test/cpp_headers/log.o 00:03:49.644 CXX test/cpp_headers/lvol.o 00:03:49.644 CC 
examples/nvme/reconnect/reconnect.o 00:03:49.644 CXX test/cpp_headers/md5.o 00:03:49.644 CXX test/cpp_headers/memory.o 00:03:49.644 CXX test/cpp_headers/mmio.o 00:03:49.644 CC test/nvme/simple_copy/simple_copy.o 00:03:49.644 CC examples/bdev/bdevperf/bdevperf.o 00:03:49.644 LINK hello_bdev 00:03:49.644 CC test/nvme/connect_stress/connect_stress.o 00:03:49.644 CXX test/cpp_headers/nbd.o 00:03:49.644 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:49.932 CXX test/cpp_headers/net.o 00:03:49.932 CC examples/nvme/arbitration/arbitration.o 00:03:49.932 LINK simple_copy 00:03:49.932 CC test/nvme/boot_partition/boot_partition.o 00:03:49.932 LINK connect_stress 00:03:49.932 CC examples/nvme/hotplug/hotplug.o 00:03:49.932 LINK reconnect 00:03:49.932 CXX test/cpp_headers/notify.o 00:03:49.932 LINK boot_partition 00:03:49.932 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:49.932 CC examples/nvme/abort/abort.o 00:03:50.193 LINK arbitration 00:03:50.193 CXX test/cpp_headers/nvme.o 00:03:50.193 CC test/nvme/compliance/nvme_compliance.o 00:03:50.193 LINK hotplug 00:03:50.193 LINK cmb_copy 00:03:50.193 CC test/nvme/fused_ordering/fused_ordering.o 00:03:50.193 CXX test/cpp_headers/nvme_intel.o 00:03:50.193 LINK nvme_manage 00:03:50.193 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:50.193 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:50.453 CC test/nvme/fdp/fdp.o 00:03:50.453 LINK fused_ordering 00:03:50.453 CXX test/cpp_headers/nvme_ocssd.o 00:03:50.453 LINK abort 00:03:50.453 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:50.453 LINK bdevperf 00:03:50.453 LINK pmr_persistence 00:03:50.453 LINK nvme_compliance 00:03:50.453 LINK doorbell_aers 00:03:50.453 CXX test/cpp_headers/nvme_spec.o 00:03:50.453 CXX test/cpp_headers/nvme_zns.o 00:03:50.453 CXX test/cpp_headers/nvmf_cmd.o 00:03:50.453 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:50.453 CXX test/cpp_headers/nvmf.o 00:03:50.714 CC test/nvme/cuse/cuse.o 00:03:50.714 CXX test/cpp_headers/nvmf_spec.o 00:03:50.714 LINK fdp 00:03:50.714 CXX test/cpp_headers/nvmf_transport.o 00:03:50.714 CXX test/cpp_headers/opal.o 00:03:50.714 CXX test/cpp_headers/opal_spec.o 00:03:50.714 CXX test/cpp_headers/pci_ids.o 00:03:50.714 CXX test/cpp_headers/pipe.o 00:03:50.714 CXX test/cpp_headers/queue.o 00:03:50.714 CC examples/nvmf/nvmf/nvmf.o 00:03:50.714 CXX test/cpp_headers/reduce.o 00:03:50.714 CXX test/cpp_headers/rpc.o 00:03:50.714 CXX test/cpp_headers/scheduler.o 00:03:50.714 CXX test/cpp_headers/scsi.o 00:03:50.714 CXX test/cpp_headers/scsi_spec.o 00:03:50.974 CXX test/cpp_headers/sock.o 00:03:50.975 CXX test/cpp_headers/stdinc.o 00:03:50.975 CXX test/cpp_headers/string.o 00:03:50.975 CXX test/cpp_headers/thread.o 00:03:50.975 CXX test/cpp_headers/trace.o 00:03:50.975 CXX test/cpp_headers/trace_parser.o 00:03:50.975 CXX test/cpp_headers/tree.o 00:03:50.975 CXX test/cpp_headers/ublk.o 00:03:50.975 CXX test/cpp_headers/util.o 00:03:50.975 CXX test/cpp_headers/uuid.o 00:03:50.975 CXX test/cpp_headers/version.o 00:03:50.975 LINK nvmf 00:03:50.975 CXX test/cpp_headers/vfio_user_pci.o 00:03:50.975 CXX test/cpp_headers/vfio_user_spec.o 00:03:51.235 CXX test/cpp_headers/vhost.o 00:03:51.235 CXX test/cpp_headers/vmd.o 00:03:51.235 CXX test/cpp_headers/xor.o 00:03:51.235 CXX test/cpp_headers/zipf.o 00:03:51.496 LINK cuse 00:03:51.757 LINK esnap 00:03:52.018 00:03:52.018 real 1m6.498s 00:03:52.018 user 6m18.561s 00:03:52.018 sys 1m7.278s 00:03:52.018 10:03:57 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:03:52.018 10:03:57 make -- common/autotest_common.sh@10 
-- $ set +x 00:03:52.018 ************************************ 00:03:52.018 END TEST make 00:03:52.018 ************************************ 00:03:52.279 10:03:57 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:52.279 10:03:57 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:52.279 10:03:57 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:52.279 10:03:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.279 10:03:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:52.279 10:03:57 -- pm/common@44 -- $ pid=5070 00:03:52.279 10:03:57 -- pm/common@50 -- $ kill -TERM 5070 00:03:52.279 10:03:57 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.279 10:03:57 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:52.279 10:03:57 -- pm/common@44 -- $ pid=5071 00:03:52.279 10:03:57 -- pm/common@50 -- $ kill -TERM 5071 00:03:52.279 10:03:57 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:52.279 10:03:57 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:52.279 10:03:57 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:03:52.279 10:03:57 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:03:52.279 10:03:57 -- common/autotest_common.sh@1691 -- # lcov --version 00:03:52.279 10:03:57 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:03:52.279 10:03:57 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.279 10:03:57 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.279 10:03:57 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.279 10:03:57 -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.279 10:03:57 -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.279 10:03:57 -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.279 10:03:57 -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.279 10:03:57 -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.279 10:03:57 -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.279 10:03:57 -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.279 10:03:57 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.279 10:03:57 -- scripts/common.sh@344 -- # case "$op" in 00:03:52.279 10:03:57 -- scripts/common.sh@345 -- # : 1 00:03:52.279 10:03:57 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.279 10:03:57 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.279 10:03:57 -- scripts/common.sh@365 -- # decimal 1 00:03:52.279 10:03:57 -- scripts/common.sh@353 -- # local d=1 00:03:52.279 10:03:57 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.279 10:03:57 -- scripts/common.sh@355 -- # echo 1 00:03:52.279 10:03:57 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.279 10:03:57 -- scripts/common.sh@366 -- # decimal 2 00:03:52.279 10:03:57 -- scripts/common.sh@353 -- # local d=2 00:03:52.279 10:03:57 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.279 10:03:57 -- scripts/common.sh@355 -- # echo 2 00:03:52.279 10:03:57 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.279 10:03:57 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.279 10:03:57 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.279 10:03:57 -- scripts/common.sh@368 -- # return 0 00:03:52.279 10:03:57 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.279 10:03:57 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:03:52.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.279 --rc genhtml_branch_coverage=1 00:03:52.279 --rc genhtml_function_coverage=1 00:03:52.279 --rc genhtml_legend=1 00:03:52.279 --rc geninfo_all_blocks=1 00:03:52.279 --rc geninfo_unexecuted_blocks=1 00:03:52.279 00:03:52.279 ' 00:03:52.279 10:03:57 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:03:52.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.279 --rc genhtml_branch_coverage=1 00:03:52.279 --rc genhtml_function_coverage=1 00:03:52.279 --rc genhtml_legend=1 00:03:52.279 --rc geninfo_all_blocks=1 00:03:52.279 --rc geninfo_unexecuted_blocks=1 00:03:52.279 00:03:52.279 ' 00:03:52.279 10:03:57 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:03:52.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.279 --rc genhtml_branch_coverage=1 00:03:52.279 --rc genhtml_function_coverage=1 00:03:52.279 --rc genhtml_legend=1 00:03:52.279 --rc geninfo_all_blocks=1 00:03:52.279 --rc geninfo_unexecuted_blocks=1 00:03:52.279 00:03:52.279 ' 00:03:52.279 10:03:57 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:03:52.279 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.279 --rc genhtml_branch_coverage=1 00:03:52.279 --rc genhtml_function_coverage=1 00:03:52.279 --rc genhtml_legend=1 00:03:52.279 --rc geninfo_all_blocks=1 00:03:52.279 --rc geninfo_unexecuted_blocks=1 00:03:52.279 00:03:52.279 ' 00:03:52.279 10:03:57 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:52.279 10:03:57 -- nvmf/common.sh@7 -- # uname -s 00:03:52.279 10:03:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.279 10:03:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.279 10:03:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.279 10:03:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.279 10:03:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.279 10:03:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.279 10:03:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.280 10:03:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.280 10:03:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.280 10:03:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.280 10:03:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:03:52.280 
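The scripts/common.sh cmp_versions trace a few records up is the harness deciding whether the installed lcov predates 2.x, so it can pick coverage flags compatible with LCOV 1.15: split both version strings on ., - and :, then compare component by component. Below is a minimal standalone sketch of that comparison, not the literal scripts/common.sh code; version_lt is a hypothetical name, and purely numeric components are assumed.

    #!/usr/bin/env bash
    # Return 0 (true) if dotted version $1 is strictly older than $2.
    # Mirrors the IFS=.-: / read -ra / per-component loop in the trace.
    version_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing parts compare as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }

    # Same decision the log makes: lcov 1.15 < 2, so the legacy --rc flags are kept.
    if version_lt "$(lcov --version | awk '{print $NF}')" 2; then
        LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
    fi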
10:03:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:03:52.280 10:03:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.280 10:03:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.280 10:03:57 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:52.280 10:03:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.280 10:03:57 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:52.280 10:03:57 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.280 10:03:57 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.280 10:03:57 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.280 10:03:57 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.280 10:03:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.280 10:03:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.280 10:03:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.280 10:03:57 -- paths/export.sh@5 -- # export PATH 00:03:52.280 10:03:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.280 10:03:57 -- nvmf/common.sh@51 -- # : 0 00:03:52.280 10:03:57 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.280 10:03:57 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:52.280 10:03:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.280 10:03:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.280 10:03:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.280 10:03:58 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.280 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:52.280 10:03:58 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.280 10:03:58 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.280 10:03:58 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.280 10:03:58 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.280 10:03:58 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.280 10:03:58 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.280 10:03:58 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.280 10:03:58 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.280 10:03:58 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.280 10:03:58 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:52.280 10:03:58 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.541 10:03:58 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.542 10:03:58 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.542 10:03:58 -- spdk/autotest.sh@48 -- # udevadm_pid=54224 00:03:52.542 10:03:58 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.542 10:03:58 -- pm/common@17 -- # local monitor 00:03:52.542 10:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.542 10:03:58 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.542 10:03:58 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.542 10:03:58 -- pm/common@25 -- # sleep 1 00:03:52.542 10:03:58 -- pm/common@21 -- # date +%s 00:03:52.542 10:03:58 -- pm/common@21 -- # date +%s 00:03:52.542 10:03:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730714638 00:03:52.542 10:03:58 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730714638 00:03:52.542 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730714638_collect-cpu-load.pm.log 00:03:52.542 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730714638_collect-vmstat.pm.log 00:03:53.486 10:03:59 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.486 10:03:59 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.486 10:03:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:53.486 10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:03:53.486 10:03:59 -- spdk/autotest.sh@59 -- # create_test_list 00:03:53.486 10:03:59 -- common/autotest_common.sh@750 -- # xtrace_disable 00:03:53.486 10:03:59 -- common/autotest_common.sh@10 -- # set +x 00:03:53.486 10:03:59 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:53.486 10:03:59 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:53.486 10:03:59 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:53.486 10:03:59 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:53.486 10:03:59 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:53.486 10:03:59 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.486 10:03:59 -- common/autotest_common.sh@1455 -- # uname 00:03:53.486 10:03:59 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:53.486 10:03:59 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.486 10:03:59 -- common/autotest_common.sh@1475 -- # uname 00:03:53.486 10:03:59 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:53.486 10:03:59 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:53.486 10:03:59 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:53.486 lcov: LCOV version 1.15 00:03:53.486 10:03:59 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:11.628 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:11.628 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:26.503 10:04:29 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:26.503 10:04:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.503 10:04:29 -- common/autotest_common.sh@10 -- # set +x 00:04:26.503 10:04:29 -- spdk/autotest.sh@78 -- # rm -f 00:04:26.503 10:04:29 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.503 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.503 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:26.503 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:26.503 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:26.503 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:26.503 10:04:30 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:26.503 10:04:30 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:26.503 10:04:30 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:26.503 10:04:30 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:26.503 10:04:30 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:26.503 10:04:30 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:26.503 10:04:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:26.503 10:04:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:26.503 10:04:30 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:26.503 10:04:30 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:30 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:30 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:26.503 10:04:30 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:26.503 10:04:30 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:26.503 1+0 records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0436572 s, 24.0 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:26.503 10:04:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:26.503 10:04:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:26.503 1+0 records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00591423 s, 177 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:26.503 10:04:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:26.503 10:04:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:26.503 1+0 
records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00499349 s, 210 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:26.503 10:04:31 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:26.503 10:04:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:26.503 1+0 records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00500035 s, 210 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:26.503 10:04:31 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:26.503 10:04:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:26.503 1+0 records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00596325 s, 176 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:26.503 10:04:31 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:26.503 10:04:31 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:26.503 10:04:31 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:26.503 10:04:31 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:26.503 No valid GPT data, bailing 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:26.503 10:04:31 -- scripts/common.sh@394 -- # pt= 00:04:26.503 10:04:31 -- scripts/common.sh@395 -- # return 1 00:04:26.503 10:04:31 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:26.503 1+0 records in 00:04:26.503 1+0 records out 00:04:26.503 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00557936 s, 188 MB/s 00:04:26.503 10:04:31 -- spdk/autotest.sh@105 -- # sync 00:04:26.503 10:04:31 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:26.503 10:04:31 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:26.503 10:04:31 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:27.438 10:04:33 -- spdk/autotest.sh@111 -- # uname -s 00:04:27.438 10:04:33 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:27.438 10:04:33 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:27.438 10:04:33 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:28.004 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:28.568 
Hugepages 00:04:28.568 node hugesize free / total 00:04:28.568 node0 1048576kB 0 / 0 00:04:28.568 node0 2048kB 0 / 0 00:04:28.568 00:04:28.568 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:28.568 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:28.568 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:28.568 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:28.568 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:28.827 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:28.827 10:04:34 -- spdk/autotest.sh@117 -- # uname -s 00:04:28.827 10:04:34 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:28.827 10:04:34 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:28.827 10:04:34 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:29.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:29.649 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.649 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.649 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.907 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:29.907 10:04:35 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:30.841 10:04:36 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:30.841 10:04:36 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:30.841 10:04:36 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:30.841 10:04:36 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:30.841 10:04:36 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:30.841 10:04:36 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:30.841 10:04:36 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:30.841 10:04:36 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:30.841 10:04:36 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:30.841 10:04:36 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:30.841 10:04:36 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:30.841 10:04:36 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:31.099 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:31.356 Waiting for block devices as requested 00:04:31.356 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.614 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.614 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.614 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:36.913 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:36.913 10:04:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:36.913 10:04:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:36.913 10:04:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:36.913 10:04:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:36.913 10:04:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:36.913 10:04:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:36.913 10:04:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:36.913 10:04:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:36.913 10:04:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:36.913 10:04:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:36.913 10:04:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:36.913 10:04:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:36.913 10:04:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:36.913 10:04:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1541 -- # continue 00:04:36.914 10:04:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:36.914 10:04:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1541 -- # continue 00:04:36.914 10:04:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:36.914 10:04:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1541 -- # continue 00:04:36.914 10:04:42 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:36.914 10:04:42 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:36.914 10:04:42 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:36.914 10:04:42 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:36.914 10:04:42 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
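The probe loop above is autotest's pre-cleanup deciding, per controller, whether namespace state needs reverting: it pulls the OACS (Optional Admin Command Support) word out of nvme id-ctrl, masks bit 3 (0x8, Namespace Management/Attachment; 0x12a & 0x8 = 8, hence oacs_ns_manage=8 in the trace), then checks unvmcap — unallocated capacity of 0 means there is nothing to reclaim, so the loop continues. A rough single-controller re-creation, assuming nvme-cli is installed; $ctrl is a placeholder for the /dev/nvme0../dev/nvme3 values the log resolves via sysfs.

    ctrl=/dev/nvme0    # placeholder; the log iterates /dev/nvme0 .. /dev/nvme3
    # "oacs : 0x12a" -> " 0x12a"; bash arithmetic accepts the hex literal.
    oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)
    if (( (oacs & 0x8) != 0 )); then          # controller can manage namespaces
        unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)
        if (( unvmcap == 0 )); then           # no unallocated NVM capacity
            echo "$ctrl: namespaces intact, nothing to revert"
        fi
    fi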
00:04:36.914 10:04:42 -- common/autotest_common.sh@1541 -- # continue 00:04:36.914 10:04:42 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:36.914 10:04:42 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:36.914 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:36.914 10:04:42 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:36.914 10:04:42 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:36.914 10:04:42 -- common/autotest_common.sh@10 -- # set +x 00:04:36.914 10:04:42 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:37.172 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:37.737 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.737 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.737 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.737 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:37.737 10:04:43 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:37.737 10:04:43 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.737 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.737 10:04:43 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:37.737 10:04:43 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:37.737 10:04:43 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:37.737 10:04:43 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:37.737 10:04:43 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:37.737 10:04:43 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:37.737 10:04:43 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:37.737 10:04:43 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:37.737 10:04:43 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:37.737 10:04:43 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:37.737 10:04:43 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:37.737 10:04:43 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:37.737 10:04:43 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:37.996 10:04:43 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:37.996 10:04:43 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:37.996 10:04:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:37.996 10:04:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.996 10:04:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:37.996 10:04:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.996 10:04:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:37.996 10:04:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
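This second sysfs pass (get_nvme_bdfs_by_id) narrows the controller list to one PCI device ID, 0x0a54 (commonly an Intel datacenter NVMe part on physical test rigs, though that attribution is an inference, not something this log states); the QEMU controllers in this VM all report 0x0010, so each \0\x\0\a\5\4 comparison fails and the loop finishes below with an empty list, leaving opal_revert_cleanup nothing to do. A sketch of the filter, with the BDF list hard-coded from this log:

    want=0x0a54    # target device ID; everything here is 0x0010, so nothing matches
    bdfs=()
    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        device=$(cat "/sys/bus/pci/devices/$bdf/device")   # e.g. "0x0010"
        [[ $device == "$want" ]] && bdfs+=("$bdf")
    done
    (( ${#bdfs[@]} > 0 )) || echo "no matching controllers; skipping OPAL revert"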
00:04:37.996 10:04:43 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:37.996 10:04:43 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:37.996 10:04:43 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:37.996 10:04:43 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:37.996 10:04:43 -- common/autotest_common.sh@1570 -- # return 0 00:04:37.996 10:04:43 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:37.996 10:04:43 -- common/autotest_common.sh@1578 -- # return 0 00:04:37.996 10:04:43 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:37.996 10:04:43 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:37.996 10:04:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.996 10:04:43 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:37.996 10:04:43 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:37.996 10:04:43 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.996 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.996 10:04:43 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:37.996 10:04:43 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.996 10:04:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.996 10:04:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.996 10:04:43 -- common/autotest_common.sh@10 -- # set +x 00:04:37.996 ************************************ 00:04:37.996 START TEST env 00:04:37.996 ************************************ 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:37.996 * Looking for test storage... 00:04:37.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:37.996 10:04:43 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.996 10:04:43 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.996 10:04:43 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.996 10:04:43 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.996 10:04:43 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.996 10:04:43 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.996 10:04:43 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.996 10:04:43 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.996 10:04:43 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.996 10:04:43 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.996 10:04:43 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.996 10:04:43 env -- scripts/common.sh@344 -- # case "$op" in 00:04:37.996 10:04:43 env -- scripts/common.sh@345 -- # : 1 00:04:37.996 10:04:43 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.996 10:04:43 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.996 10:04:43 env -- scripts/common.sh@365 -- # decimal 1 00:04:37.996 10:04:43 env -- scripts/common.sh@353 -- # local d=1 00:04:37.996 10:04:43 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.996 10:04:43 env -- scripts/common.sh@355 -- # echo 1 00:04:37.996 10:04:43 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.996 10:04:43 env -- scripts/common.sh@366 -- # decimal 2 00:04:37.996 10:04:43 env -- scripts/common.sh@353 -- # local d=2 00:04:37.996 10:04:43 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.996 10:04:43 env -- scripts/common.sh@355 -- # echo 2 00:04:37.996 10:04:43 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.996 10:04:43 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.996 10:04:43 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.996 10:04:43 env -- scripts/common.sh@368 -- # return 0 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.996 --rc genhtml_branch_coverage=1 00:04:37.996 --rc genhtml_function_coverage=1 00:04:37.996 --rc genhtml_legend=1 00:04:37.996 --rc geninfo_all_blocks=1 00:04:37.996 --rc geninfo_unexecuted_blocks=1 00:04:37.996 00:04:37.996 ' 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.996 --rc genhtml_branch_coverage=1 00:04:37.996 --rc genhtml_function_coverage=1 00:04:37.996 --rc genhtml_legend=1 00:04:37.996 --rc geninfo_all_blocks=1 00:04:37.996 --rc geninfo_unexecuted_blocks=1 00:04:37.996 00:04:37.996 ' 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.996 --rc genhtml_branch_coverage=1 00:04:37.996 --rc genhtml_function_coverage=1 00:04:37.996 --rc genhtml_legend=1 00:04:37.996 --rc geninfo_all_blocks=1 00:04:37.996 --rc geninfo_unexecuted_blocks=1 00:04:37.996 00:04:37.996 ' 00:04:37.996 10:04:43 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:37.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.996 --rc genhtml_branch_coverage=1 00:04:37.996 --rc genhtml_function_coverage=1 00:04:37.996 --rc genhtml_legend=1 00:04:37.996 --rc geninfo_all_blocks=1 00:04:37.996 --rc geninfo_unexecuted_blocks=1 00:04:37.996 00:04:37.996 ' 00:04:37.997 10:04:43 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.997 10:04:43 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:37.997 10:04:43 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:37.997 10:04:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:37.997 ************************************ 00:04:37.997 START TEST env_memory 00:04:37.997 ************************************ 00:04:37.997 10:04:43 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:37.997 00:04:37.997 00:04:37.997 CUnit - A unit testing framework for C - Version 2.1-3 00:04:37.997 http://cunit.sourceforge.net/ 00:04:37.997 00:04:37.997 00:04:37.997 Suite: memory 00:04:38.254 Test: alloc and free memory map ...[2024-11-04 10:04:43.751929] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:38.254 passed 00:04:38.254 Test: mem map translation ...[2024-11-04 10:04:43.791928] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:38.254 [2024-11-04 10:04:43.791997] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:38.254 [2024-11-04 10:04:43.792059] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:38.254 [2024-11-04 10:04:43.792075] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:38.254 passed 00:04:38.254 Test: mem map registration ...[2024-11-04 10:04:43.862070] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:38.254 [2024-11-04 10:04:43.862129] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:38.254 passed 00:04:38.254 Test: mem map adjacent registrations ...passed 00:04:38.254 00:04:38.254 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.254 suites 1 1 n/a 0 0 00:04:38.254 tests 4 4 4 0 0 00:04:38.254 asserts 152 152 152 0 n/a 00:04:38.254 00:04:38.254 Elapsed time = 0.236 seconds 00:04:38.254 00:04:38.254 real 0m0.264s 00:04:38.254 user 0m0.244s 00:04:38.254 sys 0m0.015s 00:04:38.254 10:04:43 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:38.254 10:04:43 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:38.254 ************************************ 00:04:38.254 END TEST env_memory 00:04:38.254 ************************************ 00:04:38.512 10:04:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:38.512 10:04:43 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:38.512 10:04:43 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:38.512 10:04:43 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.512 ************************************ 00:04:38.512 START TEST env_vtophys 00:04:38.512 ************************************ 00:04:38.512 10:04:44 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:38.512 EAL: lib.eal log level changed from notice to debug 00:04:38.512 EAL: Detected lcore 0 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 1 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 2 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 3 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 4 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 5 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 6 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 7 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 8 as core 0 on socket 0 00:04:38.512 EAL: Detected lcore 9 as core 0 on socket 0 00:04:38.512 EAL: Maximum logical cores by configuration: 128 00:04:38.512 EAL: Detected CPU lcores: 10 00:04:38.512 EAL: Detected NUMA nodes: 1 00:04:38.512 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:38.512 EAL: Detected shared linkage of DPDK 00:04:38.512 EAL: No 
00:04:38.512 10:04:43 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:38.512 10:04:43 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:38.512 10:04:43 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:38.512 10:04:43 env -- common/autotest_common.sh@10 -- # set +x
00:04:38.512 ************************************
00:04:38.512 START TEST env_vtophys
00:04:38.512 ************************************
00:04:38.512 10:04:44 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys
00:04:38.512 EAL: lib.eal log level changed from notice to debug
00:04:38.512 EAL: Detected lcore 0 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 1 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 2 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 3 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 4 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 5 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 6 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 7 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 8 as core 0 on socket 0
00:04:38.512 EAL: Detected lcore 9 as core 0 on socket 0
00:04:38.512 EAL: Maximum logical cores by configuration: 128
00:04:38.512 EAL: Detected CPU lcores: 10
00:04:38.512 EAL: Detected NUMA nodes: 1
00:04:38.512 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:04:38.512 EAL: Detected shared linkage of DPDK
00:04:38.512 EAL: No shared files mode enabled, IPC will be disabled
00:04:38.512 EAL: Selected IOVA mode 'PA'
00:04:38.512 EAL: Probing VFIO support...
00:04:38.512 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:38.512 EAL: VFIO modules not loaded, skipping VFIO support...
00:04:38.512 EAL: Ask a virtual area of 0x2e000 bytes
00:04:38.512 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:04:38.512 EAL: Setting up physically contiguous memory...
00:04:38.512 EAL: Setting maximum number of open files to 524288
00:04:38.512 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:04:38.512 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:04:38.512 EAL: Ask a virtual area of 0x61000 bytes
00:04:38.512 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:04:38.512 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:38.512 EAL: Ask a virtual area of 0x400000000 bytes
00:04:38.512 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:04:38.512 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:04:38.512 EAL: Ask a virtual area of 0x61000 bytes
00:04:38.512 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:04:38.512 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:38.512 EAL: Ask a virtual area of 0x400000000 bytes
00:04:38.512 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:04:38.512 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:04:38.512 EAL: Ask a virtual area of 0x61000 bytes
00:04:38.512 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:04:38.512 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:38.512 EAL: Ask a virtual area of 0x400000000 bytes
00:04:38.512 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:04:38.512 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:04:38.512 EAL: Ask a virtual area of 0x61000 bytes
00:04:38.513 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:04:38.513 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:04:38.513 EAL: Ask a virtual area of 0x400000000 bytes
00:04:38.513 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:04:38.513 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:04:38.513 EAL: Hugepages will be freed exactly as allocated.
00:04:38.513 EAL: No shared files mode enabled, IPC is disabled
00:04:38.513 EAL: No shared files mode enabled, IPC is disabled
00:04:38.513 EAL: TSC frequency is ~2600000 KHz
00:04:38.513 EAL: Main lcore 0 is ready (tid=7f78a5914a40;cpuset=[0])
00:04:38.513 EAL: Trying to obtain current memory policy.
00:04:38.513 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:38.513 EAL: Restoring previous memory policy: 0
00:04:38.513 EAL: request: mp_malloc_sync
00:04:38.513 EAL: No shared files mode enabled, IPC is disabled
00:04:38.513 EAL: Heap on socket 0 was expanded by 2MB
00:04:38.513 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory)
00:04:38.513 EAL: No PCI address specified using 'addr=<id>' in: bus=pci
00:04:38.513 EAL: Mem event callback 'spdk:(nil)' registered
00:04:38.513 EAL: Module /sys/module/vfio_pci not found!
error 2 (No such file or directory) 00:04:38.513 00:04:38.513 00:04:38.513 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.513 http://cunit.sourceforge.net/ 00:04:38.513 00:04:38.513 00:04:38.513 Suite: components_suite 00:04:38.770 Test: vtophys_malloc_test ...passed 00:04:38.770 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:38.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.770 EAL: Restoring previous memory policy: 4 00:04:38.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.770 EAL: request: mp_malloc_sync 00:04:38.770 EAL: No shared files mode enabled, IPC is disabled 00:04:38.770 EAL: Heap on socket 0 was expanded by 4MB 00:04:38.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.770 EAL: request: mp_malloc_sync 00:04:38.770 EAL: No shared files mode enabled, IPC is disabled 00:04:38.770 EAL: Heap on socket 0 was shrunk by 4MB 00:04:38.770 EAL: Trying to obtain current memory policy. 00:04:38.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:38.770 EAL: Restoring previous memory policy: 4 00:04:38.770 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.770 EAL: request: mp_malloc_sync 00:04:38.770 EAL: No shared files mode enabled, IPC is disabled 00:04:38.770 EAL: Heap on socket 0 was expanded by 6MB 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was shrunk by 6MB 00:04:39.030 EAL: Trying to obtain current memory policy. 00:04:39.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.030 EAL: Restoring previous memory policy: 4 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was expanded by 10MB 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was shrunk by 10MB 00:04:39.030 EAL: Trying to obtain current memory policy. 00:04:39.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.030 EAL: Restoring previous memory policy: 4 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was expanded by 18MB 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was shrunk by 18MB 00:04:39.030 EAL: Trying to obtain current memory policy. 00:04:39.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.030 EAL: Restoring previous memory policy: 4 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was expanded by 34MB 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was shrunk by 34MB 00:04:39.030 EAL: Trying to obtain current memory policy. 
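Each "Heap on socket 0 was expanded by N MB" / "shrunk by N MB" pair in this run (the cycles continue below, growing to 1026 MB) is one allocation-size iteration of vtophys_spdk_malloc_test: allocate a DMA-safe buffer, resolve its physical address, free it again. A rough sketch of that pattern against the public env API; the helper name and alignment choice are illustrative, not the test's source:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"

/* One expand/shrink cycle: the allocation grows the DPDK heap (firing the
 * registered mem event callback), spdk_vtophys() must resolve the buffer,
 * and freeing it lets the heap shrink again. */
static int
vtophys_cycle(size_t size)
{
	uint64_t paddr;
	void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB alignment */, NULL);

	if (buf == NULL) {
		return -1;
	}
	paddr = spdk_vtophys(buf, NULL);
	spdk_dma_free(buf);
	return paddr == SPDK_VTOPHYS_ERROR ? -1 : 0;
}
```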
00:04:39.030 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.030 EAL: Restoring previous memory policy: 4 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was expanded by 66MB 00:04:39.030 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.030 EAL: request: mp_malloc_sync 00:04:39.030 EAL: No shared files mode enabled, IPC is disabled 00:04:39.030 EAL: Heap on socket 0 was shrunk by 66MB 00:04:39.298 EAL: Trying to obtain current memory policy. 00:04:39.298 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.298 EAL: Restoring previous memory policy: 4 00:04:39.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.298 EAL: request: mp_malloc_sync 00:04:39.298 EAL: No shared files mode enabled, IPC is disabled 00:04:39.298 EAL: Heap on socket 0 was expanded by 130MB 00:04:39.298 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.298 EAL: request: mp_malloc_sync 00:04:39.298 EAL: No shared files mode enabled, IPC is disabled 00:04:39.298 EAL: Heap on socket 0 was shrunk by 130MB 00:04:39.557 EAL: Trying to obtain current memory policy. 00:04:39.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:39.557 EAL: Restoring previous memory policy: 4 00:04:39.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.557 EAL: request: mp_malloc_sync 00:04:39.557 EAL: No shared files mode enabled, IPC is disabled 00:04:39.557 EAL: Heap on socket 0 was expanded by 258MB 00:04:39.815 EAL: Calling mem event callback 'spdk:(nil)' 00:04:39.815 EAL: request: mp_malloc_sync 00:04:39.815 EAL: No shared files mode enabled, IPC is disabled 00:04:39.815 EAL: Heap on socket 0 was shrunk by 258MB 00:04:40.073 EAL: Trying to obtain current memory policy. 00:04:40.073 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:40.331 EAL: Restoring previous memory policy: 4 00:04:40.331 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.331 EAL: request: mp_malloc_sync 00:04:40.331 EAL: No shared files mode enabled, IPC is disabled 00:04:40.331 EAL: Heap on socket 0 was expanded by 514MB 00:04:40.896 EAL: Calling mem event callback 'spdk:(nil)' 00:04:40.896 EAL: request: mp_malloc_sync 00:04:40.896 EAL: No shared files mode enabled, IPC is disabled 00:04:40.896 EAL: Heap on socket 0 was shrunk by 514MB 00:04:41.462 EAL: Trying to obtain current memory policy. 
00:04:41.462 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:41.721 EAL: Restoring previous memory policy: 4 00:04:41.721 EAL: Calling mem event callback 'spdk:(nil)' 00:04:41.721 EAL: request: mp_malloc_sync 00:04:41.721 EAL: No shared files mode enabled, IPC is disabled 00:04:41.721 EAL: Heap on socket 0 was expanded by 1026MB 00:04:42.680 EAL: Calling mem event callback 'spdk:(nil)' 00:04:42.937 EAL: request: mp_malloc_sync 00:04:42.937 EAL: No shared files mode enabled, IPC is disabled 00:04:42.937 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:44.317 passed 00:04:44.317 00:04:44.317 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.317 suites 1 1 n/a 0 0 00:04:44.317 tests 2 2 2 0 0 00:04:44.317 asserts 5698 5698 5698 0 n/a 00:04:44.317 00:04:44.317 Elapsed time = 5.404 seconds 00:04:44.317 EAL: Calling mem event callback 'spdk:(nil)' 00:04:44.317 EAL: request: mp_malloc_sync 00:04:44.317 EAL: No shared files mode enabled, IPC is disabled 00:04:44.317 EAL: Heap on socket 0 was shrunk by 2MB 00:04:44.317 EAL: No shared files mode enabled, IPC is disabled 00:04:44.317 EAL: No shared files mode enabled, IPC is disabled 00:04:44.317 EAL: No shared files mode enabled, IPC is disabled 00:04:44.317 00:04:44.317 real 0m5.677s 00:04:44.317 user 0m4.763s 00:04:44.317 sys 0m0.757s 00:04:44.317 10:04:49 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.317 10:04:49 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:44.317 ************************************ 00:04:44.317 END TEST env_vtophys 00:04:44.317 ************************************ 00:04:44.317 10:04:49 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.317 10:04:49 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:44.317 10:04:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.317 10:04:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.317 ************************************ 00:04:44.317 START TEST env_pci 00:04:44.317 ************************************ 00:04:44.317 10:04:49 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:44.317 00:04:44.317 00:04:44.317 CUnit - A unit testing framework for C - Version 2.1-3 00:04:44.317 http://cunit.sourceforge.net/ 00:04:44.317 00:04:44.317 00:04:44.317 Suite: pci 00:04:44.317 Test: pci_hook ...[2024-11-04 10:04:49.748556] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57005 has claimed it 00:04:44.317 passed 00:04:44.317 00:04:44.317 Run Summary: Type Total Ran Passed Failed Inactive 00:04:44.317 suites 1 1 n/a 0 0 00:04:44.317 tests 1 1 1 0 0 00:04:44.317 asserts 25 25 25 0 n/a 00:04:44.317 00:04:44.317 Elapsed time = 0.008 seconds 00:04:44.317 EAL: Cannot find device (10000:00:01.0) 00:04:44.317 EAL: Failed to attach device on primary process 00:04:44.317 00:04:44.317 real 0m0.069s 00:04:44.317 user 0m0.023s 00:04:44.317 sys 0m0.045s 00:04:44.317 10:04:49 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:44.317 10:04:49 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:44.317 ************************************ 00:04:44.317 END TEST env_pci 00:04:44.317 ************************************ 00:04:44.317 10:04:49 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:44.317 10:04:49 env -- env/env.sh@15 -- # uname 00:04:44.317 10:04:49 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:44.317 10:04:49 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:44.317 10:04:49 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.317 10:04:49 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:44.317 10:04:49 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:44.317 10:04:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:44.317 ************************************ 00:04:44.317 START TEST env_dpdk_post_init 00:04:44.317 ************************************ 00:04:44.317 10:04:49 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:44.317 EAL: Detected CPU lcores: 10 00:04:44.317 EAL: Detected NUMA nodes: 1 00:04:44.317 EAL: Detected shared linkage of DPDK 00:04:44.317 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:44.317 EAL: Selected IOVA mode 'PA' 00:04:44.317 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:44.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:44.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:44.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:44.317 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:44.575 Starting DPDK initialization... 00:04:44.575 Starting SPDK post initialization... 00:04:44.575 SPDK NVMe probe 00:04:44.575 Attaching to 0000:00:10.0 00:04:44.575 Attaching to 0000:00:11.0 00:04:44.575 Attaching to 0000:00:12.0 00:04:44.575 Attaching to 0000:00:13.0 00:04:44.575 Attached to 0000:00:10.0 00:04:44.575 Attached to 0000:00:11.0 00:04:44.575 Attached to 0000:00:13.0 00:04:44.575 Attached to 0000:00:12.0 00:04:44.575 Cleaning up... 
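The probe/attach sequence above ("SPDK NVMe probe", "Attaching to", "Attached to") is the standard controller enumeration flow: initialize the env layer, then let spdk_nvme_probe() walk the local PCI bus and call back once per claimed controller. A minimal sketch of that flow; the application name is hypothetical and error handling is trimmed:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Return true to claim the controller; EAL then prints the
 * "Probe PCI driver: spdk_nvme (1b36:0010)" lines seen above. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true;
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "post_init_sketch"; /* hypothetical name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}
	/* NULL trid = probe all local PCIe controllers */
	return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```

The four "Attached to" lines above correspond to the four emulated controllers (0000:00:10.0 through 0000:00:13.0) the probe claimed; attach order is not guaranteed to match probe order.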
00:04:44.575
00:04:44.575 real 0m0.249s
00:04:44.575 user 0m0.076s
00:04:44.575 sys 0m0.075s
00:04:44.575 10:04:50 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:44.575 10:04:50 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:44.575 ************************************
00:04:44.575 END TEST env_dpdk_post_init
00:04:44.575 ************************************
00:04:44.575 10:04:50 env -- env/env.sh@26 -- # uname
00:04:44.575 10:04:50 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:44.575 10:04:50 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:44.575 10:04:50 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:44.575 10:04:50 env -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:44.575 10:04:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:44.575 ************************************
00:04:44.575 START TEST env_mem_callbacks
00:04:44.575 ************************************
00:04:44.575 10:04:50 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks
00:04:44.575 EAL: Detected CPU lcores: 10
00:04:44.575 EAL: Detected NUMA nodes: 1
00:04:44.575 EAL: Detected shared linkage of DPDK
00:04:44.575 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:44.575 EAL: Selected IOVA mode 'PA'
00:04:44.575
00:04:44.575
00:04:44.575 CUnit - A unit testing framework for C - Version 2.1-3
00:04:44.575 http://cunit.sourceforge.net/
00:04:44.575
00:04:44.575
00:04:44.575 Suite: memory
00:04:44.575 Test: test ...
00:04:44.575 register 0x200000200000 2097152
00:04:44.575 malloc 3145728
00:04:44.575 register 0x200000400000 4194304
00:04:44.575 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:44.575 buf 0x2000004fffc0 len 3145728 PASSED
00:04:44.575 malloc 64
00:04:44.575 buf 0x2000004ffec0 len 64 PASSED
00:04:44.575 malloc 4194304
00:04:44.575 register 0x200000800000 6291456
00:04:44.575 buf 0x2000009fffc0 len 4194304 PASSED
00:04:44.575 free 0x2000004fffc0 3145728
00:04:44.575 free 0x2000004ffec0 64
00:04:44.575 unregister 0x200000400000 4194304 PASSED
00:04:44.575 free 0x2000009fffc0 4194304
00:04:44.575 unregister 0x200000800000 6291456 PASSED
00:04:44.575 malloc 8388608
00:04:44.575 register 0x200000400000 10485760
00:04:44.575 buf 0x2000005fffc0 len 8388608 PASSED
00:04:44.575 free 0x2000005fffc0 8388608
00:04:44.833 unregister 0x200000400000 10485760 PASSED
00:04:44.833 passed
00:04:44.833
00:04:44.833 Run Summary: Type Total Ran Passed Failed Inactive
00:04:44.833 suites 1 1 n/a 0 0
00:04:44.833 tests 1 1 1 0 0
00:04:44.833 asserts 15 15 15 0 n/a
00:04:44.833
00:04:44.833 Elapsed time = 0.041 seconds
00:04:44.833
00:04:44.833 real 0m0.204s
00:04:44.833 user 0m0.059s
00:04:44.833 sys 0m0.042s
00:04:44.833 10:04:50 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:44.833 10:04:50 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:44.833 ************************************
00:04:44.833 END TEST env_mem_callbacks
00:04:44.833 ************************************
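The register/malloc/buf/free/unregister lines above come from a memory-map notify hook: mem_callbacks installs a callback, and the env layer then reports every hugepage region the heap registers or releases while buffers are allocated and freed. A condensed sketch of that hook, again against the public include/spdk/env.h API; the print format is approximate:

```c
#include "spdk/stdinc.h"
#include "spdk/env.h"

/* Called by the env layer whenever a region is registered or unregistered,
 * producing lines like "register 0x200000400000 4194304" above. */
static int
test_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
		enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
{
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops test_ops = {
	.notify_cb = test_mem_notify,
	.are_contiguous = NULL,
};

static void
install_hook_and_exercise(void)
{
	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &test_ops, NULL);

	/* An allocation that grows the heap triggers a "register"
	 * notification; freeing it can trigger an "unregister". */
	void *buf = spdk_dma_malloc(3 * 1024 * 1024, 0, NULL); /* "malloc 3145728" */

	spdk_dma_free(buf);
	spdk_mem_map_free(&map);
}
```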
00:04:44.833 ************************************
00:04:44.833 END TEST env
00:04:44.833 ************************************
00:04:44.833
00:04:44.833 real 0m6.817s
00:04:44.833 user 0m5.335s
00:04:44.833 sys 0m1.121s
00:04:44.833 10:04:50 env -- common/autotest_common.sh@1128 -- # xtrace_disable
00:04:44.833 10:04:50 env -- common/autotest_common.sh@10 -- # set +x
00:04:44.833 10:04:50 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:44.833 10:04:50 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:04:44.833 10:04:50 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:04:44.833 10:04:50 -- common/autotest_common.sh@10 -- # set +x
00:04:44.833 ************************************
00:04:44.833 START TEST rpc
00:04:44.833 ************************************
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh
00:04:44.833 * Looking for test storage...
00:04:44.833 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1691 -- # lcov --version
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:04:44.833 10:04:50 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:44.833 10:04:50 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:44.833 10:04:50 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:44.833 10:04:50 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:44.833 10:04:50 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:44.833 10:04:50 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:44.833 10:04:50 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:44.833 10:04:50 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:44.833 10:04:50 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:44.833 10:04:50 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:44.833 10:04:50 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:44.833 10:04:50 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:44.833 10:04:50 rpc -- scripts/common.sh@345 -- # : 1
00:04:44.833 10:04:50 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:44.833 10:04:50 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:04:44.833 10:04:50 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:44.833 10:04:50 rpc -- scripts/common.sh@353 -- # local d=1 00:04:44.833 10:04:50 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:44.833 10:04:50 rpc -- scripts/common.sh@355 -- # echo 1 00:04:44.833 10:04:50 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:44.833 10:04:50 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:44.833 10:04:50 rpc -- scripts/common.sh@353 -- # local d=2 00:04:44.833 10:04:50 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:44.833 10:04:50 rpc -- scripts/common.sh@355 -- # echo 2 00:04:44.833 10:04:50 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:44.833 10:04:50 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:44.833 10:04:50 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:44.833 10:04:50 rpc -- scripts/common.sh@368 -- # return 0 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.833 --rc genhtml_branch_coverage=1 00:04:44.833 --rc genhtml_function_coverage=1 00:04:44.833 --rc genhtml_legend=1 00:04:44.833 --rc geninfo_all_blocks=1 00:04:44.833 --rc geninfo_unexecuted_blocks=1 00:04:44.833 00:04:44.833 ' 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.833 --rc genhtml_branch_coverage=1 00:04:44.833 --rc genhtml_function_coverage=1 00:04:44.833 --rc genhtml_legend=1 00:04:44.833 --rc geninfo_all_blocks=1 00:04:44.833 --rc geninfo_unexecuted_blocks=1 00:04:44.833 00:04:44.833 ' 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.833 --rc genhtml_branch_coverage=1 00:04:44.833 --rc genhtml_function_coverage=1 00:04:44.833 --rc genhtml_legend=1 00:04:44.833 --rc geninfo_all_blocks=1 00:04:44.833 --rc geninfo_unexecuted_blocks=1 00:04:44.833 00:04:44.833 ' 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:44.833 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:44.833 --rc genhtml_branch_coverage=1 00:04:44.833 --rc genhtml_function_coverage=1 00:04:44.833 --rc genhtml_legend=1 00:04:44.833 --rc geninfo_all_blocks=1 00:04:44.833 --rc geninfo_unexecuted_blocks=1 00:04:44.833 00:04:44.833 ' 00:04:44.833 10:04:50 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57127 00:04:44.833 10:04:50 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:44.833 10:04:50 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:44.833 10:04:50 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57127 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@833 -- # '[' -z 57127 ']' 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:44.833 10:04:50 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:44.834 10:04:50 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.091 [2024-11-04 10:04:50.629985] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:04:45.091 [2024-11-04 10:04:50.630302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57127 ] 00:04:45.091 [2024-11-04 10:04:50.790382] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:45.349 [2024-11-04 10:04:50.890335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:45.349 [2024-11-04 10:04:50.890394] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57127' to capture a snapshot of events at runtime. 00:04:45.349 [2024-11-04 10:04:50.890404] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:45.349 [2024-11-04 10:04:50.890414] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:45.349 [2024-11-04 10:04:50.890421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57127 for offline analysis/debug. 00:04:45.349 [2024-11-04 10:04:50.891297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.914 10:04:51 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:45.914 10:04:51 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:45.914 10:04:51 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.914 10:04:51 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:45.914 10:04:51 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:45.914 10:04:51 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:45.914 10:04:51 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:45.914 10:04:51 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:45.914 10:04:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 ************************************ 00:04:45.914 START TEST rpc_integrity 00:04:45.914 ************************************ 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.914 10:04:51 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:45.914 { 00:04:45.914 "name": "Malloc0", 00:04:45.914 "aliases": [ 00:04:45.914 "74ea13e1-f94c-4d23-8368-ce173df1efad" 00:04:45.914 ], 00:04:45.914 "product_name": "Malloc disk", 00:04:45.914 "block_size": 512, 00:04:45.914 "num_blocks": 16384, 00:04:45.914 "uuid": "74ea13e1-f94c-4d23-8368-ce173df1efad", 00:04:45.914 "assigned_rate_limits": { 00:04:45.914 "rw_ios_per_sec": 0, 00:04:45.914 "rw_mbytes_per_sec": 0, 00:04:45.914 "r_mbytes_per_sec": 0, 00:04:45.914 "w_mbytes_per_sec": 0 00:04:45.914 }, 00:04:45.914 "claimed": false, 00:04:45.914 "zoned": false, 00:04:45.914 "supported_io_types": { 00:04:45.914 "read": true, 00:04:45.914 "write": true, 00:04:45.914 "unmap": true, 00:04:45.914 "flush": true, 00:04:45.914 "reset": true, 00:04:45.914 "nvme_admin": false, 00:04:45.914 "nvme_io": false, 00:04:45.914 "nvme_io_md": false, 00:04:45.914 "write_zeroes": true, 00:04:45.914 "zcopy": true, 00:04:45.914 "get_zone_info": false, 00:04:45.914 "zone_management": false, 00:04:45.914 "zone_append": false, 00:04:45.914 "compare": false, 00:04:45.914 "compare_and_write": false, 00:04:45.914 "abort": true, 00:04:45.914 "seek_hole": false, 00:04:45.914 "seek_data": false, 00:04:45.914 "copy": true, 00:04:45.914 "nvme_iov_md": false 00:04:45.914 }, 00:04:45.914 "memory_domains": [ 00:04:45.914 { 00:04:45.914 "dma_device_id": "system", 00:04:45.914 "dma_device_type": 1 00:04:45.914 }, 00:04:45.914 { 00:04:45.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.914 "dma_device_type": 2 00:04:45.914 } 00:04:45.914 ], 00:04:45.914 "driver_specific": {} 00:04:45.914 } 00:04:45.914 ]' 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:45.914 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.914 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.914 [2024-11-04 10:04:51.608200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:45.914 [2024-11-04 10:04:51.608264] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:45.914 [2024-11-04 10:04:51.608292] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:45.914 [2024-11-04 10:04:51.608304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:45.914 [2024-11-04 10:04:51.610531] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:45.915 [2024-11-04 10:04:51.610683] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:45.915 Passthru0 00:04:45.915 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.915 
10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:45.915 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:45.915 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:45.915 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:45.915 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:45.915 { 00:04:45.915 "name": "Malloc0", 00:04:45.915 "aliases": [ 00:04:45.915 "74ea13e1-f94c-4d23-8368-ce173df1efad" 00:04:45.915 ], 00:04:45.915 "product_name": "Malloc disk", 00:04:45.915 "block_size": 512, 00:04:45.915 "num_blocks": 16384, 00:04:45.915 "uuid": "74ea13e1-f94c-4d23-8368-ce173df1efad", 00:04:45.915 "assigned_rate_limits": { 00:04:45.915 "rw_ios_per_sec": 0, 00:04:45.915 "rw_mbytes_per_sec": 0, 00:04:45.915 "r_mbytes_per_sec": 0, 00:04:45.915 "w_mbytes_per_sec": 0 00:04:45.915 }, 00:04:45.915 "claimed": true, 00:04:45.915 "claim_type": "exclusive_write", 00:04:45.915 "zoned": false, 00:04:45.915 "supported_io_types": { 00:04:45.915 "read": true, 00:04:45.915 "write": true, 00:04:45.915 "unmap": true, 00:04:45.915 "flush": true, 00:04:45.915 "reset": true, 00:04:45.915 "nvme_admin": false, 00:04:45.915 "nvme_io": false, 00:04:45.915 "nvme_io_md": false, 00:04:45.915 "write_zeroes": true, 00:04:45.915 "zcopy": true, 00:04:45.915 "get_zone_info": false, 00:04:45.915 "zone_management": false, 00:04:45.915 "zone_append": false, 00:04:45.915 "compare": false, 00:04:45.915 "compare_and_write": false, 00:04:45.915 "abort": true, 00:04:45.915 "seek_hole": false, 00:04:45.915 "seek_data": false, 00:04:45.915 "copy": true, 00:04:45.915 "nvme_iov_md": false 00:04:45.915 }, 00:04:45.915 "memory_domains": [ 00:04:45.915 { 00:04:45.915 "dma_device_id": "system", 00:04:45.915 "dma_device_type": 1 00:04:45.915 }, 00:04:45.915 { 00:04:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.915 "dma_device_type": 2 00:04:45.915 } 00:04:45.915 ], 00:04:45.915 "driver_specific": {} 00:04:45.915 }, 00:04:45.915 { 00:04:45.915 "name": "Passthru0", 00:04:45.915 "aliases": [ 00:04:45.915 "83f0895e-6353-59a9-951b-09d587c565b0" 00:04:45.915 ], 00:04:45.915 "product_name": "passthru", 00:04:45.915 "block_size": 512, 00:04:45.915 "num_blocks": 16384, 00:04:45.915 "uuid": "83f0895e-6353-59a9-951b-09d587c565b0", 00:04:45.915 "assigned_rate_limits": { 00:04:45.915 "rw_ios_per_sec": 0, 00:04:45.915 "rw_mbytes_per_sec": 0, 00:04:45.915 "r_mbytes_per_sec": 0, 00:04:45.915 "w_mbytes_per_sec": 0 00:04:45.915 }, 00:04:45.915 "claimed": false, 00:04:45.915 "zoned": false, 00:04:45.915 "supported_io_types": { 00:04:45.915 "read": true, 00:04:45.915 "write": true, 00:04:45.915 "unmap": true, 00:04:45.915 "flush": true, 00:04:45.915 "reset": true, 00:04:45.915 "nvme_admin": false, 00:04:45.915 "nvme_io": false, 00:04:45.915 "nvme_io_md": false, 00:04:45.915 "write_zeroes": true, 00:04:45.915 "zcopy": true, 00:04:45.915 "get_zone_info": false, 00:04:45.915 "zone_management": false, 00:04:45.915 "zone_append": false, 00:04:45.915 "compare": false, 00:04:45.915 "compare_and_write": false, 00:04:45.915 "abort": true, 00:04:45.915 "seek_hole": false, 00:04:45.915 "seek_data": false, 00:04:45.915 "copy": true, 00:04:45.915 "nvme_iov_md": false 00:04:45.915 }, 00:04:45.915 "memory_domains": [ 00:04:45.915 { 00:04:45.915 "dma_device_id": "system", 00:04:45.915 "dma_device_type": 1 00:04:45.915 }, 00:04:45.915 { 00:04:45.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:45.915 "dma_device_type": 2 
00:04:45.915 } 00:04:45.915 ], 00:04:45.915 "driver_specific": { 00:04:45.915 "passthru": { 00:04:45.915 "name": "Passthru0", 00:04:45.915 "base_bdev_name": "Malloc0" 00:04:45.915 } 00:04:45.915 } 00:04:45.915 } 00:04:45.915 ]' 00:04:45.915 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.173 ************************************ 00:04:46.173 END TEST rpc_integrity 00:04:46.173 ************************************ 00:04:46.173 10:04:51 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.173 00:04:46.173 real 0m0.240s 00:04:46.173 user 0m0.126s 00:04:46.173 sys 0m0.035s 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.173 10:04:51 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.173 10:04:51 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:46.173 10:04:51 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.173 10:04:51 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.173 10:04:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.173 ************************************ 00:04:46.173 START TEST rpc_plugins 00:04:46.173 ************************************ 00:04:46.173 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:46.173 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:46.174 { 00:04:46.174 "name": "Malloc1", 00:04:46.174 "aliases": 
[ 00:04:46.174 "3959ddd1-378e-492c-ab4a-cc8097fd1697" 00:04:46.174 ], 00:04:46.174 "product_name": "Malloc disk", 00:04:46.174 "block_size": 4096, 00:04:46.174 "num_blocks": 256, 00:04:46.174 "uuid": "3959ddd1-378e-492c-ab4a-cc8097fd1697", 00:04:46.174 "assigned_rate_limits": { 00:04:46.174 "rw_ios_per_sec": 0, 00:04:46.174 "rw_mbytes_per_sec": 0, 00:04:46.174 "r_mbytes_per_sec": 0, 00:04:46.174 "w_mbytes_per_sec": 0 00:04:46.174 }, 00:04:46.174 "claimed": false, 00:04:46.174 "zoned": false, 00:04:46.174 "supported_io_types": { 00:04:46.174 "read": true, 00:04:46.174 "write": true, 00:04:46.174 "unmap": true, 00:04:46.174 "flush": true, 00:04:46.174 "reset": true, 00:04:46.174 "nvme_admin": false, 00:04:46.174 "nvme_io": false, 00:04:46.174 "nvme_io_md": false, 00:04:46.174 "write_zeroes": true, 00:04:46.174 "zcopy": true, 00:04:46.174 "get_zone_info": false, 00:04:46.174 "zone_management": false, 00:04:46.174 "zone_append": false, 00:04:46.174 "compare": false, 00:04:46.174 "compare_and_write": false, 00:04:46.174 "abort": true, 00:04:46.174 "seek_hole": false, 00:04:46.174 "seek_data": false, 00:04:46.174 "copy": true, 00:04:46.174 "nvme_iov_md": false 00:04:46.174 }, 00:04:46.174 "memory_domains": [ 00:04:46.174 { 00:04:46.174 "dma_device_id": "system", 00:04:46.174 "dma_device_type": 1 00:04:46.174 }, 00:04:46.174 { 00:04:46.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.174 "dma_device_type": 2 00:04:46.174 } 00:04:46.174 ], 00:04:46.174 "driver_specific": {} 00:04:46.174 } 00:04:46.174 ]' 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:46.174 ************************************ 00:04:46.174 END TEST rpc_plugins 00:04:46.174 ************************************ 00:04:46.174 10:04:51 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:46.174 00:04:46.174 real 0m0.114s 00:04:46.174 user 0m0.061s 00:04:46.174 sys 0m0.017s 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.174 10:04:51 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 10:04:51 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:46.433 10:04:51 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.433 10:04:51 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.433 10:04:51 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 ************************************ 00:04:46.433 START TEST rpc_trace_cmd_test 00:04:46.433 ************************************ 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:46.433 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57127", 00:04:46.433 "tpoint_group_mask": "0x8", 00:04:46.433 "iscsi_conn": { 00:04:46.433 "mask": "0x2", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "scsi": { 00:04:46.433 "mask": "0x4", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "bdev": { 00:04:46.433 "mask": "0x8", 00:04:46.433 "tpoint_mask": "0xffffffffffffffff" 00:04:46.433 }, 00:04:46.433 "nvmf_rdma": { 00:04:46.433 "mask": "0x10", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "nvmf_tcp": { 00:04:46.433 "mask": "0x20", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "ftl": { 00:04:46.433 "mask": "0x40", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "blobfs": { 00:04:46.433 "mask": "0x80", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "dsa": { 00:04:46.433 "mask": "0x200", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "thread": { 00:04:46.433 "mask": "0x400", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "nvme_pcie": { 00:04:46.433 "mask": "0x800", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "iaa": { 00:04:46.433 "mask": "0x1000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "nvme_tcp": { 00:04:46.433 "mask": "0x2000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "bdev_nvme": { 00:04:46.433 "mask": "0x4000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "sock": { 00:04:46.433 "mask": "0x8000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "blob": { 00:04:46.433 "mask": "0x10000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "bdev_raid": { 00:04:46.433 "mask": "0x20000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 }, 00:04:46.433 "scheduler": { 00:04:46.433 "mask": "0x40000", 00:04:46.433 "tpoint_mask": "0x0" 00:04:46.433 } 00:04:46.433 }' 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:46.433 10:04:51 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:46.433 ************************************ 00:04:46.433 END TEST rpc_trace_cmd_test 00:04:46.433 ************************************ 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:46.433 00:04:46.433 real 0m0.170s 
00:04:46.433 user 0m0.147s 00:04:46.433 sys 0m0.016s 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.433 10:04:52 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 10:04:52 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:46.433 10:04:52 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:46.433 10:04:52 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:46.433 10:04:52 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:46.433 10:04:52 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:46.433 10:04:52 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 ************************************ 00:04:46.433 START TEST rpc_daemon_integrity 00:04:46.433 ************************************ 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.433 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:46.692 { 00:04:46.692 "name": "Malloc2", 00:04:46.692 "aliases": [ 00:04:46.692 "46feeac5-d2af-46e1-830e-bbb233eb14a6" 00:04:46.692 ], 00:04:46.692 "product_name": "Malloc disk", 00:04:46.692 "block_size": 512, 00:04:46.692 "num_blocks": 16384, 00:04:46.692 "uuid": "46feeac5-d2af-46e1-830e-bbb233eb14a6", 00:04:46.692 "assigned_rate_limits": { 00:04:46.692 "rw_ios_per_sec": 0, 00:04:46.692 "rw_mbytes_per_sec": 0, 00:04:46.692 "r_mbytes_per_sec": 0, 00:04:46.692 "w_mbytes_per_sec": 0 00:04:46.692 }, 00:04:46.692 "claimed": false, 00:04:46.692 "zoned": false, 00:04:46.692 "supported_io_types": { 00:04:46.692 "read": true, 00:04:46.692 "write": true, 00:04:46.692 "unmap": true, 00:04:46.692 "flush": true, 00:04:46.692 "reset": true, 00:04:46.692 "nvme_admin": false, 00:04:46.692 "nvme_io": false, 00:04:46.692 "nvme_io_md": false, 00:04:46.692 "write_zeroes": true, 00:04:46.692 "zcopy": true, 00:04:46.692 "get_zone_info": false, 00:04:46.692 "zone_management": false, 00:04:46.692 "zone_append": false, 00:04:46.692 "compare": false, 00:04:46.692 
"compare_and_write": false, 00:04:46.692 "abort": true, 00:04:46.692 "seek_hole": false, 00:04:46.692 "seek_data": false, 00:04:46.692 "copy": true, 00:04:46.692 "nvme_iov_md": false 00:04:46.692 }, 00:04:46.692 "memory_domains": [ 00:04:46.692 { 00:04:46.692 "dma_device_id": "system", 00:04:46.692 "dma_device_type": 1 00:04:46.692 }, 00:04:46.692 { 00:04:46.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.692 "dma_device_type": 2 00:04:46.692 } 00:04:46.692 ], 00:04:46.692 "driver_specific": {} 00:04:46.692 } 00:04:46.692 ]' 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.692 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.692 [2024-11-04 10:04:52.243639] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:46.693 [2024-11-04 10:04:52.243701] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:46.693 [2024-11-04 10:04:52.243723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:46.693 [2024-11-04 10:04:52.243734] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:46.693 [2024-11-04 10:04:52.245924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:46.693 [2024-11-04 10:04:52.245965] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:46.693 Passthru0 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:46.693 { 00:04:46.693 "name": "Malloc2", 00:04:46.693 "aliases": [ 00:04:46.693 "46feeac5-d2af-46e1-830e-bbb233eb14a6" 00:04:46.693 ], 00:04:46.693 "product_name": "Malloc disk", 00:04:46.693 "block_size": 512, 00:04:46.693 "num_blocks": 16384, 00:04:46.693 "uuid": "46feeac5-d2af-46e1-830e-bbb233eb14a6", 00:04:46.693 "assigned_rate_limits": { 00:04:46.693 "rw_ios_per_sec": 0, 00:04:46.693 "rw_mbytes_per_sec": 0, 00:04:46.693 "r_mbytes_per_sec": 0, 00:04:46.693 "w_mbytes_per_sec": 0 00:04:46.693 }, 00:04:46.693 "claimed": true, 00:04:46.693 "claim_type": "exclusive_write", 00:04:46.693 "zoned": false, 00:04:46.693 "supported_io_types": { 00:04:46.693 "read": true, 00:04:46.693 "write": true, 00:04:46.693 "unmap": true, 00:04:46.693 "flush": true, 00:04:46.693 "reset": true, 00:04:46.693 "nvme_admin": false, 00:04:46.693 "nvme_io": false, 00:04:46.693 "nvme_io_md": false, 00:04:46.693 "write_zeroes": true, 00:04:46.693 "zcopy": true, 00:04:46.693 "get_zone_info": false, 00:04:46.693 "zone_management": false, 00:04:46.693 "zone_append": false, 00:04:46.693 "compare": false, 00:04:46.693 "compare_and_write": false, 00:04:46.693 "abort": true, 00:04:46.693 "seek_hole": false, 00:04:46.693 "seek_data": false, 
00:04:46.693 "copy": true, 00:04:46.693 "nvme_iov_md": false 00:04:46.693 }, 00:04:46.693 "memory_domains": [ 00:04:46.693 { 00:04:46.693 "dma_device_id": "system", 00:04:46.693 "dma_device_type": 1 00:04:46.693 }, 00:04:46.693 { 00:04:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.693 "dma_device_type": 2 00:04:46.693 } 00:04:46.693 ], 00:04:46.693 "driver_specific": {} 00:04:46.693 }, 00:04:46.693 { 00:04:46.693 "name": "Passthru0", 00:04:46.693 "aliases": [ 00:04:46.693 "3e2e0ee7-6f5b-5676-99f2-2b4ef04a1a03" 00:04:46.693 ], 00:04:46.693 "product_name": "passthru", 00:04:46.693 "block_size": 512, 00:04:46.693 "num_blocks": 16384, 00:04:46.693 "uuid": "3e2e0ee7-6f5b-5676-99f2-2b4ef04a1a03", 00:04:46.693 "assigned_rate_limits": { 00:04:46.693 "rw_ios_per_sec": 0, 00:04:46.693 "rw_mbytes_per_sec": 0, 00:04:46.693 "r_mbytes_per_sec": 0, 00:04:46.693 "w_mbytes_per_sec": 0 00:04:46.693 }, 00:04:46.693 "claimed": false, 00:04:46.693 "zoned": false, 00:04:46.693 "supported_io_types": { 00:04:46.693 "read": true, 00:04:46.693 "write": true, 00:04:46.693 "unmap": true, 00:04:46.693 "flush": true, 00:04:46.693 "reset": true, 00:04:46.693 "nvme_admin": false, 00:04:46.693 "nvme_io": false, 00:04:46.693 "nvme_io_md": false, 00:04:46.693 "write_zeroes": true, 00:04:46.693 "zcopy": true, 00:04:46.693 "get_zone_info": false, 00:04:46.693 "zone_management": false, 00:04:46.693 "zone_append": false, 00:04:46.693 "compare": false, 00:04:46.693 "compare_and_write": false, 00:04:46.693 "abort": true, 00:04:46.693 "seek_hole": false, 00:04:46.693 "seek_data": false, 00:04:46.693 "copy": true, 00:04:46.693 "nvme_iov_md": false 00:04:46.693 }, 00:04:46.693 "memory_domains": [ 00:04:46.693 { 00:04:46.693 "dma_device_id": "system", 00:04:46.693 "dma_device_type": 1 00:04:46.693 }, 00:04:46.693 { 00:04:46.693 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:46.693 "dma_device_type": 2 00:04:46.693 } 00:04:46.693 ], 00:04:46.693 "driver_specific": { 00:04:46.693 "passthru": { 00:04:46.693 "name": "Passthru0", 00:04:46.693 "base_bdev_name": "Malloc2" 00:04:46.693 } 00:04:46.693 } 00:04:46.693 } 00:04:46.693 ]' 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:46.693 ************************************ 00:04:46.693 END TEST rpc_daemon_integrity 00:04:46.693 ************************************ 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:46.693 00:04:46.693 real 0m0.239s 00:04:46.693 user 0m0.121s 00:04:46.693 sys 0m0.038s 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:46.693 10:04:52 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 10:04:52 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:46.693 10:04:52 rpc -- rpc/rpc.sh@84 -- # killprocess 57127 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@952 -- # '[' -z 57127 ']' 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@956 -- # kill -0 57127 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@957 -- # uname 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57127 00:04:46.693 killing process with pid 57127 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57127' 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@971 -- # kill 57127 00:04:46.693 10:04:52 rpc -- common/autotest_common.sh@976 -- # wait 57127 00:04:48.597 ************************************ 00:04:48.597 END TEST rpc 00:04:48.597 ************************************ 00:04:48.597 00:04:48.597 real 0m3.528s 00:04:48.597 user 0m3.974s 00:04:48.597 sys 0m0.589s 00:04:48.597 10:04:53 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:48.597 10:04:53 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.597 10:04:53 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.597 10:04:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.597 10:04:53 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.597 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:48.597 ************************************ 00:04:48.597 START TEST skip_rpc 00:04:48.597 ************************************ 00:04:48.597 10:04:53 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:48.597 * Looking for test storage... 
00:04:48.597 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.597 10:04:54 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.597 --rc genhtml_branch_coverage=1 00:04:48.597 --rc genhtml_function_coverage=1 00:04:48.597 --rc genhtml_legend=1 00:04:48.597 --rc geninfo_all_blocks=1 00:04:48.597 --rc geninfo_unexecuted_blocks=1 00:04:48.597 00:04:48.597 ' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.597 --rc genhtml_branch_coverage=1 00:04:48.597 --rc genhtml_function_coverage=1 00:04:48.597 --rc genhtml_legend=1 00:04:48.597 --rc geninfo_all_blocks=1 00:04:48.597 --rc geninfo_unexecuted_blocks=1 00:04:48.597 00:04:48.597 ' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:04:48.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.597 --rc genhtml_branch_coverage=1 00:04:48.597 --rc genhtml_function_coverage=1 00:04:48.597 --rc genhtml_legend=1 00:04:48.597 --rc geninfo_all_blocks=1 00:04:48.597 --rc geninfo_unexecuted_blocks=1 00:04:48.597 00:04:48.597 ' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.597 --rc genhtml_branch_coverage=1 00:04:48.597 --rc genhtml_function_coverage=1 00:04:48.597 --rc genhtml_legend=1 00:04:48.597 --rc geninfo_all_blocks=1 00:04:48.597 --rc geninfo_unexecuted_blocks=1 00:04:48.597 00:04:48.597 ' 00:04:48.597 10:04:54 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:48.597 10:04:54 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:48.597 10:04:54 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.597 10:04:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.597 ************************************ 00:04:48.597 START TEST skip_rpc 00:04:48.597 ************************************ 00:04:48.597 10:04:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:48.597 10:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57339 00:04:48.597 10:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.597 10:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:48.597 10:04:54 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:48.597 [2024-11-04 10:04:54.188622] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:04:48.597 [2024-11-04 10:04:54.188721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57339 ] 00:04:48.856 [2024-11-04 10:04:54.348157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.856 [2024-11-04 10:04:54.449832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57339 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57339 ']' 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57339 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57339 00:04:54.137 killing process with pid 57339 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57339' 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 57339 00:04:54.137 10:04:59 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57339 00:04:54.703 00:04:54.703 real 0m6.256s 00:04:54.703 user 0m5.873s 00:04:54.703 sys 0m0.279s 00:04:54.703 10:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.703 10:05:00 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.703 ************************************ 00:04:54.703 END TEST skip_rpc 00:04:54.703 
************************************ 00:04:54.703 10:05:00 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:54.703 10:05:00 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.703 10:05:00 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.703 10:05:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:54.703 ************************************ 00:04:54.703 START TEST skip_rpc_with_json 00:04:54.703 ************************************ 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57432 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57432 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57432 ']' 00:04:54.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:54.703 10:05:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:54.960 [2024-11-04 10:05:00.486580] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:04:54.960 [2024-11-04 10:05:00.486833] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57432 ] 00:04:54.960 [2024-11-04 10:05:00.635833] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.218 [2024-11-04 10:05:00.722657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.782 [2024-11-04 10:05:01.349832] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:55.782 request: 00:04:55.782 { 00:04:55.782 "trtype": "tcp", 00:04:55.782 "method": "nvmf_get_transports", 00:04:55.782 "req_id": 1 00:04:55.782 } 00:04:55.782 Got JSON-RPC error response 00:04:55.782 response: 00:04:55.782 { 00:04:55.782 "code": -19, 00:04:55.782 "message": "No such device" 00:04:55.782 } 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.782 [2024-11-04 10:05:01.357931] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:55.782 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:55.782 { 00:04:55.782 "subsystems": [ 00:04:55.782 { 00:04:55.782 "subsystem": "fsdev", 00:04:55.782 "config": [ 00:04:55.782 { 00:04:55.782 "method": "fsdev_set_opts", 00:04:55.782 "params": { 00:04:55.782 "fsdev_io_pool_size": 65535, 00:04:55.782 "fsdev_io_cache_size": 256 00:04:55.782 } 00:04:55.782 } 00:04:55.782 ] 00:04:55.782 }, 00:04:55.782 { 00:04:55.782 "subsystem": "keyring", 00:04:55.782 "config": [] 00:04:55.782 }, 00:04:55.782 { 00:04:55.782 "subsystem": "iobuf", 00:04:55.782 "config": [ 00:04:55.782 { 00:04:55.782 "method": "iobuf_set_options", 00:04:55.782 "params": { 00:04:55.782 "small_pool_count": 8192, 00:04:55.782 "large_pool_count": 1024, 00:04:55.782 "small_bufsize": 8192, 00:04:55.782 "large_bufsize": 135168, 00:04:55.782 "enable_numa": false 00:04:55.782 } 00:04:55.782 } 00:04:55.782 ] 00:04:55.782 }, 00:04:55.782 { 00:04:55.782 "subsystem": "sock", 00:04:55.782 "config": [ 00:04:55.782 { 
00:04:55.783 "method": "sock_set_default_impl", 00:04:55.783 "params": { 00:04:55.783 "impl_name": "posix" 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "sock_impl_set_options", 00:04:55.783 "params": { 00:04:55.783 "impl_name": "ssl", 00:04:55.783 "recv_buf_size": 4096, 00:04:55.783 "send_buf_size": 4096, 00:04:55.783 "enable_recv_pipe": true, 00:04:55.783 "enable_quickack": false, 00:04:55.783 "enable_placement_id": 0, 00:04:55.783 "enable_zerocopy_send_server": true, 00:04:55.783 "enable_zerocopy_send_client": false, 00:04:55.783 "zerocopy_threshold": 0, 00:04:55.783 "tls_version": 0, 00:04:55.783 "enable_ktls": false 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "sock_impl_set_options", 00:04:55.783 "params": { 00:04:55.783 "impl_name": "posix", 00:04:55.783 "recv_buf_size": 2097152, 00:04:55.783 "send_buf_size": 2097152, 00:04:55.783 "enable_recv_pipe": true, 00:04:55.783 "enable_quickack": false, 00:04:55.783 "enable_placement_id": 0, 00:04:55.783 "enable_zerocopy_send_server": true, 00:04:55.783 "enable_zerocopy_send_client": false, 00:04:55.783 "zerocopy_threshold": 0, 00:04:55.783 "tls_version": 0, 00:04:55.783 "enable_ktls": false 00:04:55.783 } 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "vmd", 00:04:55.783 "config": [] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "accel", 00:04:55.783 "config": [ 00:04:55.783 { 00:04:55.783 "method": "accel_set_options", 00:04:55.783 "params": { 00:04:55.783 "small_cache_size": 128, 00:04:55.783 "large_cache_size": 16, 00:04:55.783 "task_count": 2048, 00:04:55.783 "sequence_count": 2048, 00:04:55.783 "buf_count": 2048 00:04:55.783 } 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "bdev", 00:04:55.783 "config": [ 00:04:55.783 { 00:04:55.783 "method": "bdev_set_options", 00:04:55.783 "params": { 00:04:55.783 "bdev_io_pool_size": 65535, 00:04:55.783 "bdev_io_cache_size": 256, 00:04:55.783 "bdev_auto_examine": true, 00:04:55.783 "iobuf_small_cache_size": 128, 00:04:55.783 "iobuf_large_cache_size": 16 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "bdev_raid_set_options", 00:04:55.783 "params": { 00:04:55.783 "process_window_size_kb": 1024, 00:04:55.783 "process_max_bandwidth_mb_sec": 0 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "bdev_iscsi_set_options", 00:04:55.783 "params": { 00:04:55.783 "timeout_sec": 30 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "bdev_nvme_set_options", 00:04:55.783 "params": { 00:04:55.783 "action_on_timeout": "none", 00:04:55.783 "timeout_us": 0, 00:04:55.783 "timeout_admin_us": 0, 00:04:55.783 "keep_alive_timeout_ms": 10000, 00:04:55.783 "arbitration_burst": 0, 00:04:55.783 "low_priority_weight": 0, 00:04:55.783 "medium_priority_weight": 0, 00:04:55.783 "high_priority_weight": 0, 00:04:55.783 "nvme_adminq_poll_period_us": 10000, 00:04:55.783 "nvme_ioq_poll_period_us": 0, 00:04:55.783 "io_queue_requests": 0, 00:04:55.783 "delay_cmd_submit": true, 00:04:55.783 "transport_retry_count": 4, 00:04:55.783 "bdev_retry_count": 3, 00:04:55.783 "transport_ack_timeout": 0, 00:04:55.783 "ctrlr_loss_timeout_sec": 0, 00:04:55.783 "reconnect_delay_sec": 0, 00:04:55.783 "fast_io_fail_timeout_sec": 0, 00:04:55.783 "disable_auto_failback": false, 00:04:55.783 "generate_uuids": false, 00:04:55.783 "transport_tos": 0, 00:04:55.783 "nvme_error_stat": false, 00:04:55.783 "rdma_srq_size": 0, 00:04:55.783 "io_path_stat": false, 
00:04:55.783 "allow_accel_sequence": false, 00:04:55.783 "rdma_max_cq_size": 0, 00:04:55.783 "rdma_cm_event_timeout_ms": 0, 00:04:55.783 "dhchap_digests": [ 00:04:55.783 "sha256", 00:04:55.783 "sha384", 00:04:55.783 "sha512" 00:04:55.783 ], 00:04:55.783 "dhchap_dhgroups": [ 00:04:55.783 "null", 00:04:55.783 "ffdhe2048", 00:04:55.783 "ffdhe3072", 00:04:55.783 "ffdhe4096", 00:04:55.783 "ffdhe6144", 00:04:55.783 "ffdhe8192" 00:04:55.783 ] 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "bdev_nvme_set_hotplug", 00:04:55.783 "params": { 00:04:55.783 "period_us": 100000, 00:04:55.783 "enable": false 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "bdev_wait_for_examine" 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "scsi", 00:04:55.783 "config": null 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "scheduler", 00:04:55.783 "config": [ 00:04:55.783 { 00:04:55.783 "method": "framework_set_scheduler", 00:04:55.783 "params": { 00:04:55.783 "name": "static" 00:04:55.783 } 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "vhost_scsi", 00:04:55.783 "config": [] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "vhost_blk", 00:04:55.783 "config": [] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "ublk", 00:04:55.783 "config": [] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "nbd", 00:04:55.783 "config": [] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "nvmf", 00:04:55.783 "config": [ 00:04:55.783 { 00:04:55.783 "method": "nvmf_set_config", 00:04:55.783 "params": { 00:04:55.783 "discovery_filter": "match_any", 00:04:55.783 "admin_cmd_passthru": { 00:04:55.783 "identify_ctrlr": false 00:04:55.783 }, 00:04:55.783 "dhchap_digests": [ 00:04:55.783 "sha256", 00:04:55.783 "sha384", 00:04:55.783 "sha512" 00:04:55.783 ], 00:04:55.783 "dhchap_dhgroups": [ 00:04:55.783 "null", 00:04:55.783 "ffdhe2048", 00:04:55.783 "ffdhe3072", 00:04:55.783 "ffdhe4096", 00:04:55.783 "ffdhe6144", 00:04:55.783 "ffdhe8192" 00:04:55.783 ] 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "nvmf_set_max_subsystems", 00:04:55.783 "params": { 00:04:55.783 "max_subsystems": 1024 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "nvmf_set_crdt", 00:04:55.783 "params": { 00:04:55.783 "crdt1": 0, 00:04:55.783 "crdt2": 0, 00:04:55.783 "crdt3": 0 00:04:55.783 } 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "method": "nvmf_create_transport", 00:04:55.783 "params": { 00:04:55.783 "trtype": "TCP", 00:04:55.783 "max_queue_depth": 128, 00:04:55.783 "max_io_qpairs_per_ctrlr": 127, 00:04:55.783 "in_capsule_data_size": 4096, 00:04:55.783 "max_io_size": 131072, 00:04:55.783 "io_unit_size": 131072, 00:04:55.783 "max_aq_depth": 128, 00:04:55.783 "num_shared_buffers": 511, 00:04:55.783 "buf_cache_size": 4294967295, 00:04:55.783 "dif_insert_or_strip": false, 00:04:55.783 "zcopy": false, 00:04:55.783 "c2h_success": true, 00:04:55.783 "sock_priority": 0, 00:04:55.783 "abort_timeout_sec": 1, 00:04:55.783 "ack_timeout": 0, 00:04:55.783 "data_wr_pool_size": 0 00:04:55.783 } 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 }, 00:04:55.783 { 00:04:55.783 "subsystem": "iscsi", 00:04:55.783 "config": [ 00:04:55.783 { 00:04:55.783 "method": "iscsi_set_options", 00:04:55.783 "params": { 00:04:55.783 "node_base": "iqn.2016-06.io.spdk", 00:04:55.783 "max_sessions": 128, 00:04:55.783 "max_connections_per_session": 2, 00:04:55.783 "max_queue_depth": 64, 00:04:55.783 
"default_time2wait": 2, 00:04:55.783 "default_time2retain": 20, 00:04:55.783 "first_burst_length": 8192, 00:04:55.783 "immediate_data": true, 00:04:55.783 "allow_duplicated_isid": false, 00:04:55.783 "error_recovery_level": 0, 00:04:55.783 "nop_timeout": 60, 00:04:55.783 "nop_in_interval": 30, 00:04:55.783 "disable_chap": false, 00:04:55.783 "require_chap": false, 00:04:55.783 "mutual_chap": false, 00:04:55.783 "chap_group": 0, 00:04:55.783 "max_large_datain_per_connection": 64, 00:04:55.783 "max_r2t_per_connection": 4, 00:04:55.783 "pdu_pool_size": 36864, 00:04:55.783 "immediate_data_pool_size": 16384, 00:04:55.783 "data_out_pool_size": 2048 00:04:55.783 } 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 } 00:04:55.783 ] 00:04:55.783 } 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57432 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57432 ']' 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57432 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:55.783 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57432 00:04:56.040 killing process with pid 57432 00:04:56.040 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:56.040 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:56.040 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57432' 00:04:56.040 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57432 00:04:56.040 10:05:01 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57432 00:04:57.412 10:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57472 00:04:57.412 10:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:57.412 10:05:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57472 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57472 ']' 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57472 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57472 00:05:02.696 killing process with pid 57472 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57472' 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57472 00:05:02.696 10:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57472 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:03.629 ************************************ 00:05:03.629 END TEST skip_rpc_with_json 00:05:03.629 ************************************ 00:05:03.629 00:05:03.629 real 0m8.596s 00:05:03.629 user 0m8.219s 00:05:03.629 sys 0m0.613s 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.629 10:05:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.629 ************************************ 00:05:03.629 START TEST skip_rpc_with_delay 00:05:03.629 ************************************ 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:03.629 [2024-11-04 10:05:09.134511] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
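[editor's sketch] test_skip_rpc_with_delay is a negative test: it hands spdk_tgt the mutually exclusive flags --no-rpc-server and --wait-for-rpc (the ERROR above) and passes only if the launch exits non-zero. A minimal sketch of that idiom, assuming the same binary path; this NOT helper is a simplified stand-in for the richer one in common/autotest_common.sh:

    #!/usr/bin/env bash
    # Sketch: assert that a command fails (simplified NOT helper).
    NOT() {
        if "$@"; then
            return 1   # command unexpectedly succeeded -> test failure
        fi
        return 0       # command failed, as the test expects
    }

    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    # --wait-for-rpc is meaningless when no RPC server will start, so this must fail.
    NOT "$tgt" --no-rpc-server -m 0x1 --wait-for-rpc
    echo 'spdk_tgt rejected --no-rpc-server + --wait-for-rpc as expected'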
00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:03.629 00:05:03.629 real 0m0.125s 00:05:03.629 user 0m0.064s 00:05:03.629 sys 0m0.060s 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:03.629 ************************************ 00:05:03.629 END TEST skip_rpc_with_delay 00:05:03.629 ************************************ 00:05:03.629 10:05:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:03.629 10:05:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:03.629 10:05:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:03.629 10:05:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:03.629 10:05:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.629 ************************************ 00:05:03.629 START TEST exit_on_failed_rpc_init 00:05:03.629 ************************************ 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:03.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57589 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57589 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57589 ']' 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:03.629 10:05:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.629 [2024-11-04 10:05:09.315925] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:05:03.629 [2024-11-04 10:05:09.316061] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57589 ] 00:05:03.887 [2024-11-04 10:05:09.477020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.887 [2024-11-04 10:05:09.578625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:04.451 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:04.709 [2024-11-04 10:05:10.264112] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:04.709 [2024-11-04 10:05:10.264242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57607 ] 00:05:04.709 [2024-11-04 10:05:10.425090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.966 [2024-11-04 10:05:10.529739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.966 [2024-11-04 10:05:10.529859] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:04.966 [2024-11-04 10:05:10.529874] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:04.966 [2024-11-04 10:05:10.529888] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:05.223 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:05.223 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57589 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57589 ']' 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57589 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57589 00:05:05.224 killing process with pid 57589 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57589' 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57589 00:05:05.224 10:05:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57589 00:05:06.598 ************************************ 00:05:06.598 END TEST exit_on_failed_rpc_init 00:05:06.598 ************************************ 00:05:06.598 00:05:06.598 real 0m2.865s 00:05:06.598 user 0m3.158s 00:05:06.598 sys 0m0.449s 00:05:06.598 10:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.598 10:05:12 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.598 10:05:12 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.598 00:05:06.598 real 0m18.164s 00:05:06.598 user 0m17.440s 00:05:06.598 sys 0m1.586s 00:05:06.598 10:05:12 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.598 ************************************ 00:05:06.598 END TEST skip_rpc 00:05:06.598 ************************************ 00:05:06.598 10:05:12 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:06.598 10:05:12 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:06.598 10:05:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.598 10:05:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.598 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.598 
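[editor's sketch] test_exit_on_failed_rpc_init, which finished above, races two spdk_tgt instances for the same default RPC socket: the second must die during RPC initialization ('/var/tmp/spdk.sock in use. Specify another.') and take the app down with a non-zero exit. A rough sketch of reproducing that collision by hand, assuming hugepages are configured and using a crude sleep in place of the suite's waitforlisten helper:

    #!/usr/bin/env bash
    # Sketch: two spdk_tgt instances contending for /var/tmp/spdk.sock.
    set -euo pipefail
    tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$tgt" -m 0x1 &          # first instance binds the default RPC socket
    pid=$!
    sleep 2                  # crude wait; the test uses waitforlisten

    # Second instance on another core mask must fail: RPC init finds the
    # socket in use and spdk_app_start returns an error.
    if "$tgt" -m 0x2; then
        echo 'unexpected: second instance started' >&2
        kill "$pid"
        exit 1
    fi
    echo 'second spdk_tgt failed as expected (RPC socket in use)'

    kill "$pid"
    wait "$pid" || true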
************************************ 00:05:06.598 START TEST rpc_client 00:05:06.598 ************************************ 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:06.598 * Looking for test storage... 00:05:06.598 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.598 10:05:12 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.598 --rc genhtml_branch_coverage=1 00:05:06.598 --rc genhtml_function_coverage=1 00:05:06.598 --rc genhtml_legend=1 00:05:06.598 --rc geninfo_all_blocks=1 00:05:06.598 --rc geninfo_unexecuted_blocks=1 00:05:06.598 00:05:06.598 ' 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.598 --rc genhtml_branch_coverage=1 00:05:06.598 --rc genhtml_function_coverage=1 00:05:06.598 --rc genhtml_legend=1 00:05:06.598 --rc geninfo_all_blocks=1 00:05:06.598 --rc geninfo_unexecuted_blocks=1 00:05:06.598 00:05:06.598 ' 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.598 --rc genhtml_branch_coverage=1 00:05:06.598 --rc genhtml_function_coverage=1 00:05:06.598 --rc genhtml_legend=1 00:05:06.598 --rc geninfo_all_blocks=1 00:05:06.598 --rc geninfo_unexecuted_blocks=1 00:05:06.598 00:05:06.598 ' 00:05:06.598 10:05:12 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.598 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.598 --rc genhtml_branch_coverage=1 00:05:06.598 --rc genhtml_function_coverage=1 00:05:06.598 --rc genhtml_legend=1 00:05:06.598 --rc geninfo_all_blocks=1 00:05:06.598 --rc geninfo_unexecuted_blocks=1 00:05:06.598 00:05:06.598 ' 00:05:06.598 10:05:12 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:06.857 OK 00:05:06.857 10:05:12 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:06.857 00:05:06.857 real 0m0.197s 00:05:06.857 user 0m0.106s 00:05:06.857 sys 0m0.098s 00:05:06.857 10:05:12 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.857 10:05:12 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:06.857 ************************************ 00:05:06.857 END TEST rpc_client 00:05:06.857 ************************************ 00:05:06.857 10:05:12 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:06.857 10:05:12 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.857 10:05:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.857 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.857 ************************************ 00:05:06.857 START TEST json_config 00:05:06.857 ************************************ 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:06.857 10:05:12 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:06.857 10:05:12 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:06.857 10:05:12 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:06.857 10:05:12 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:06.857 10:05:12 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:06.857 10:05:12 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:06.857 10:05:12 json_config -- scripts/common.sh@345 -- # : 1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:06.857 10:05:12 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:06.857 10:05:12 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@353 -- # local d=1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:06.857 10:05:12 json_config -- scripts/common.sh@355 -- # echo 1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:06.857 10:05:12 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@353 -- # local d=2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:06.857 10:05:12 json_config -- scripts/common.sh@355 -- # echo 2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:06.857 10:05:12 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:06.857 10:05:12 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:06.857 10:05:12 json_config -- scripts/common.sh@368 -- # return 0 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.857 --rc genhtml_branch_coverage=1 00:05:06.857 --rc genhtml_function_coverage=1 00:05:06.857 --rc genhtml_legend=1 00:05:06.857 --rc geninfo_all_blocks=1 00:05:06.857 --rc geninfo_unexecuted_blocks=1 00:05:06.857 00:05:06.857 ' 00:05:06.857 10:05:12 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:06.857 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.858 --rc genhtml_branch_coverage=1 00:05:06.858 --rc genhtml_function_coverage=1 00:05:06.858 --rc genhtml_legend=1 00:05:06.858 --rc geninfo_all_blocks=1 00:05:06.858 --rc geninfo_unexecuted_blocks=1 00:05:06.858 00:05:06.858 ' 00:05:06.858 10:05:12 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:06.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.858 --rc genhtml_branch_coverage=1 00:05:06.858 --rc genhtml_function_coverage=1 00:05:06.858 --rc genhtml_legend=1 00:05:06.858 --rc geninfo_all_blocks=1 00:05:06.858 --rc geninfo_unexecuted_blocks=1 00:05:06.858 00:05:06.858 ' 00:05:06.858 10:05:12 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:06.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:06.858 --rc genhtml_branch_coverage=1 00:05:06.858 --rc genhtml_function_coverage=1 00:05:06.858 --rc genhtml_legend=1 00:05:06.858 --rc geninfo_all_blocks=1 00:05:06.858 --rc geninfo_unexecuted_blocks=1 00:05:06.858 00:05:06.858 ' 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:06.858 10:05:12 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:06.858 10:05:12 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:06.858 10:05:12 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:06.858 10:05:12 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:06.858 10:05:12 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:06.858 10:05:12 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.858 10:05:12 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.858 10:05:12 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.858 10:05:12 json_config -- paths/export.sh@5 -- # export PATH 00:05:06.858 10:05:12 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@51 -- # : 0 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:06.858 10:05:12 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:06.858 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:06.858 10:05:12 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:06.858 WARNING: No tests are enabled so not running JSON configuration tests 00:05:06.858 10:05:12 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:06.858 00:05:06.858 real 0m0.141s 00:05:06.858 user 0m0.105s 00:05:06.858 sys 0m0.038s 00:05:06.858 10:05:12 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.858 10:05:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:06.858 ************************************ 00:05:06.858 END TEST json_config 00:05:06.858 ************************************ 00:05:06.858 10:05:12 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:06.858 10:05:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.858 10:05:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.858 10:05:12 -- common/autotest_common.sh@10 -- # set +x 00:05:06.858 ************************************ 00:05:06.858 START TEST json_config_extra_key 00:05:06.858 ************************************ 00:05:06.858 10:05:12 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:07.118 10:05:12 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.118 10:05:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.118 10:05:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.118 10:05:12 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.118 10:05:12 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.118 10:05:12 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.119 --rc genhtml_branch_coverage=1 00:05:07.119 --rc genhtml_function_coverage=1 00:05:07.119 --rc genhtml_legend=1 00:05:07.119 --rc geninfo_all_blocks=1 00:05:07.119 --rc geninfo_unexecuted_blocks=1 00:05:07.119 00:05:07.119 ' 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.119 --rc genhtml_branch_coverage=1 00:05:07.119 --rc genhtml_function_coverage=1 00:05:07.119 --rc genhtml_legend=1 00:05:07.119 --rc geninfo_all_blocks=1 00:05:07.119 --rc geninfo_unexecuted_blocks=1 00:05:07.119 00:05:07.119 ' 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.119 --rc genhtml_branch_coverage=1 00:05:07.119 --rc genhtml_function_coverage=1 00:05:07.119 --rc genhtml_legend=1 00:05:07.119 --rc geninfo_all_blocks=1 00:05:07.119 --rc geninfo_unexecuted_blocks=1 00:05:07.119 00:05:07.119 ' 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.119 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.119 --rc genhtml_branch_coverage=1 00:05:07.119 --rc 
genhtml_function_coverage=1 00:05:07.119 --rc genhtml_legend=1 00:05:07.119 --rc geninfo_all_blocks=1 00:05:07.119 --rc geninfo_unexecuted_blocks=1 00:05:07.119 00:05:07.119 ' 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=dbee7ee1-51db-4d57-88e5-df07b0d2c945 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:07.119 10:05:12 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:07.119 10:05:12 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.119 10:05:12 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.119 10:05:12 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.119 10:05:12 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:07.119 10:05:12 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:07.119 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:07.119 10:05:12 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:07.119 INFO: launching applications... 
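The "[: : integer expression expected" message captured above comes from nvmf/common.sh line 33, where '[' '' -eq 1 ']' hands the numeric -eq operator an empty expansion. A minimal sketch of that failure mode and one defensive rewrite; the variable name 'flag' is hypothetical, standing in for whichever unset test knob triggers it here:

#!/usr/bin/env bash
# Sketch of the '[: : integer expression expected' error seen in the trace.
flag=''                                   # empty expansion, as in the captured log
[ "$flag" -eq 1 ] 2>/dev/null || echo "comparison errors when flag is empty"
# Defensive rewrite: default the expansion so -eq always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then echo "flag enabled"; fi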
00:05:07.119 10:05:12 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57806 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:07.119 Waiting for target to run... 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57806 /var/tmp/spdk_tgt.sock 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57806 ']' 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:07.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.119 10:05:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:07.119 10:05:12 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:07.119 [2024-11-04 10:05:12.825200] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:07.120 [2024-11-04 10:05:12.825493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:05:07.703 [2024-11-04 10:05:13.143808] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.703 [2024-11-04 10:05:13.238187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.270 10:05:13 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.270 00:05:08.270 INFO: shutting down applications... 00:05:08.270 10:05:13 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:08.270 10:05:13 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
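The shutdown sequence traced below sends SIGINT to the target and then polls it with 'kill -0' every 0.5 s, up to 30 tries, before printing "SPDK target shutdown done". A stand-alone sketch of that wait-for-exit loop under the same bounds; 'sleep 5 &' is only a hypothetical stand-in for spdk_tgt:

#!/usr/bin/env bash
sleep 5 &                          # stand-in workload, not a real spdk_tgt
pid=$!
kill -SIGINT "$pid"                # request a graceful shutdown
for (( i = 0; i < 30; i++ )); do   # 30 tries x 0.5 s, as in json_config/common.sh
    kill -0 "$pid" 2>/dev/null || break
    sleep 0.5
done
kill -0 "$pid" 2>/dev/null && echo "still alive; escalate" || echo "target shutdown done"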
00:05:08.270 10:05:13 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57806 ]] 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57806 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57806 00:05:08.270 10:05:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:08.529 10:05:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:08.529 10:05:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:08.529 10:05:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57806 00:05:08.529 10:05:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.094 10:05:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.094 10:05:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.094 10:05:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57806 00:05:09.094 10:05:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:09.658 10:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:09.658 10:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:09.658 10:05:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57806 00:05:09.658 10:05:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57806 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:10.225 SPDK target shutdown done 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:10.225 10:05:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:10.225 Success 00:05:10.225 10:05:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:10.225 ************************************ 00:05:10.225 END TEST json_config_extra_key 00:05:10.225 ************************************ 00:05:10.225 00:05:10.225 real 0m3.190s 00:05:10.225 user 0m2.826s 00:05:10.225 sys 0m0.416s 00:05:10.225 10:05:15 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:10.225 10:05:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:10.225 10:05:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:10.225 10:05:15 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:10.225 10:05:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:10.225 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:10.225 
************************************ 00:05:10.225 START TEST alias_rpc 00:05:10.225 ************************************ 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:10.225 * Looking for test storage... 00:05:10.225 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.225 10:05:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:10.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.225 --rc genhtml_branch_coverage=1 00:05:10.225 --rc genhtml_function_coverage=1 00:05:10.225 --rc genhtml_legend=1 00:05:10.225 --rc geninfo_all_blocks=1 00:05:10.225 --rc geninfo_unexecuted_blocks=1 00:05:10.225 00:05:10.225 ' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:10.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.225 --rc genhtml_branch_coverage=1 00:05:10.225 --rc genhtml_function_coverage=1 00:05:10.225 --rc genhtml_legend=1 00:05:10.225 --rc geninfo_all_blocks=1 00:05:10.225 --rc geninfo_unexecuted_blocks=1 00:05:10.225 00:05:10.225 ' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:10.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.225 --rc genhtml_branch_coverage=1 00:05:10.225 --rc genhtml_function_coverage=1 00:05:10.225 --rc genhtml_legend=1 00:05:10.225 --rc geninfo_all_blocks=1 00:05:10.225 --rc geninfo_unexecuted_blocks=1 00:05:10.225 00:05:10.225 ' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:10.225 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.225 --rc genhtml_branch_coverage=1 00:05:10.225 --rc genhtml_function_coverage=1 00:05:10.225 --rc genhtml_legend=1 00:05:10.225 --rc geninfo_all_blocks=1 00:05:10.225 --rc geninfo_unexecuted_blocks=1 00:05:10.225 00:05:10.225 ' 00:05:10.225 10:05:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:10.225 10:05:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57899 00:05:10.225 10:05:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57899 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57899 ']' 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:10.225 10:05:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:10.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:10.225 10:05:15 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.484 [2024-11-04 10:05:16.029343] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:10.484 [2024-11-04 10:05:16.029647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57899 ] 00:05:10.484 [2024-11-04 10:05:16.190395] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.742 [2024-11-04 10:05:16.295988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.309 10:05:16 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:11.309 10:05:16 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:11.309 10:05:16 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:11.568 10:05:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57899 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57899 ']' 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57899 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57899 00:05:11.568 killing process with pid 57899 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57899' 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@971 -- # kill 57899 00:05:11.568 10:05:17 alias_rpc -- common/autotest_common.sh@976 -- # wait 57899 00:05:12.943 ************************************ 00:05:12.943 END TEST alias_rpc 00:05:12.943 ************************************ 00:05:12.943 00:05:12.943 real 0m2.872s 00:05:12.943 user 0m2.961s 00:05:12.943 sys 0m0.426s 00:05:12.943 10:05:18 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.943 10:05:18 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.201 10:05:18 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:13.201 10:05:18 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.201 10:05:18 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:13.201 10:05:18 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:13.201 10:05:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.201 ************************************ 00:05:13.201 START TEST spdkcli_tcp 00:05:13.201 ************************************ 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:13.202 * Looking for test storage... 
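The killprocess 57899 sequence above checks the PID is alive with 'kill -0', reads its command name with 'ps --no-headers -o comm=' (reactor_0 in this run), then kills and reaps it. A simplified sketch of that helper; it can only reap children of the calling shell, and the real version additionally refuses to kill processes named sudo:

#!/usr/bin/env bash
killprocess() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 1     # still running?
    name=$(ps --no-headers -o comm= "$pid")    # command name, e.g. reactor_0
    echo "killing process with pid $pid ($name)"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # reap so no zombie remains
}
sleep 30 &                                     # hypothetical stand-in process
killprocess "$!"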
00:05:13.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.202 10:05:18 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.202 --rc genhtml_branch_coverage=1 00:05:13.202 --rc genhtml_function_coverage=1 00:05:13.202 --rc genhtml_legend=1 00:05:13.202 --rc geninfo_all_blocks=1 00:05:13.202 --rc geninfo_unexecuted_blocks=1 00:05:13.202 00:05:13.202 ' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.202 --rc genhtml_branch_coverage=1 00:05:13.202 --rc genhtml_function_coverage=1 00:05:13.202 --rc genhtml_legend=1 00:05:13.202 --rc geninfo_all_blocks=1 00:05:13.202 --rc geninfo_unexecuted_blocks=1 00:05:13.202 
00:05:13.202 ' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.202 --rc genhtml_branch_coverage=1 00:05:13.202 --rc genhtml_function_coverage=1 00:05:13.202 --rc genhtml_legend=1 00:05:13.202 --rc geninfo_all_blocks=1 00:05:13.202 --rc geninfo_unexecuted_blocks=1 00:05:13.202 00:05:13.202 ' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:13.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.202 --rc genhtml_branch_coverage=1 00:05:13.202 --rc genhtml_function_coverage=1 00:05:13.202 --rc genhtml_legend=1 00:05:13.202 --rc geninfo_all_blocks=1 00:05:13.202 --rc geninfo_unexecuted_blocks=1 00:05:13.202 00:05:13.202 ' 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57995 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.202 10:05:18 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57995 00:05:13.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57995 ']' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:13.202 10:05:18 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:13.461 [2024-11-04 10:05:18.950564] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
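The lcov version gate traced above ('lt 1.15 2' via cmp_versions) splits each version string on '.', '-' and ':' into an array and compares components numerically, treating missing components as zero. A simplified, lt-only sketch of that logic; the real scripts/common.sh helper takes an arbitrary comparison operator:

#!/usr/bin/env bash
# Returns 0 when version $1 sorts strictly before version $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # missing parts count as 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                                              # equal is not less-than
}
lt 1.15 2 && echo "1.15 < 2"       # the exact check seen in the trace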
00:05:13.461 [2024-11-04 10:05:18.950683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57995 ] 00:05:13.461 [2024-11-04 10:05:19.111576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:13.719 [2024-11-04 10:05:19.214644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:13.719 [2024-11-04 10:05:19.214752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.284 10:05:19 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:14.284 10:05:19 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:14.284 10:05:19 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.284 10:05:19 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58012 00:05:14.284 10:05:19 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.545 [ 00:05:14.545 "bdev_malloc_delete", 00:05:14.545 "bdev_malloc_create", 00:05:14.545 "bdev_null_resize", 00:05:14.545 "bdev_null_delete", 00:05:14.545 "bdev_null_create", 00:05:14.545 "bdev_nvme_cuse_unregister", 00:05:14.545 "bdev_nvme_cuse_register", 00:05:14.545 "bdev_opal_new_user", 00:05:14.545 "bdev_opal_set_lock_state", 00:05:14.545 "bdev_opal_delete", 00:05:14.545 "bdev_opal_get_info", 00:05:14.545 "bdev_opal_create", 00:05:14.545 "bdev_nvme_opal_revert", 00:05:14.545 "bdev_nvme_opal_init", 00:05:14.545 "bdev_nvme_send_cmd", 00:05:14.545 "bdev_nvme_set_keys", 00:05:14.545 "bdev_nvme_get_path_iostat", 00:05:14.545 "bdev_nvme_get_mdns_discovery_info", 00:05:14.545 "bdev_nvme_stop_mdns_discovery", 00:05:14.545 "bdev_nvme_start_mdns_discovery", 00:05:14.545 "bdev_nvme_set_multipath_policy", 00:05:14.545 "bdev_nvme_set_preferred_path", 00:05:14.545 "bdev_nvme_get_io_paths", 00:05:14.545 "bdev_nvme_remove_error_injection", 00:05:14.545 "bdev_nvme_add_error_injection", 00:05:14.545 "bdev_nvme_get_discovery_info", 00:05:14.545 "bdev_nvme_stop_discovery", 00:05:14.545 "bdev_nvme_start_discovery", 00:05:14.545 "bdev_nvme_get_controller_health_info", 00:05:14.545 "bdev_nvme_disable_controller", 00:05:14.545 "bdev_nvme_enable_controller", 00:05:14.545 "bdev_nvme_reset_controller", 00:05:14.545 "bdev_nvme_get_transport_statistics", 00:05:14.545 "bdev_nvme_apply_firmware", 00:05:14.545 "bdev_nvme_detach_controller", 00:05:14.545 "bdev_nvme_get_controllers", 00:05:14.545 "bdev_nvme_attach_controller", 00:05:14.545 "bdev_nvme_set_hotplug", 00:05:14.545 "bdev_nvme_set_options", 00:05:14.545 "bdev_passthru_delete", 00:05:14.545 "bdev_passthru_create", 00:05:14.545 "bdev_lvol_set_parent_bdev", 00:05:14.545 "bdev_lvol_set_parent", 00:05:14.545 "bdev_lvol_check_shallow_copy", 00:05:14.545 "bdev_lvol_start_shallow_copy", 00:05:14.545 "bdev_lvol_grow_lvstore", 00:05:14.545 "bdev_lvol_get_lvols", 00:05:14.545 "bdev_lvol_get_lvstores", 00:05:14.545 "bdev_lvol_delete", 00:05:14.545 "bdev_lvol_set_read_only", 00:05:14.545 "bdev_lvol_resize", 00:05:14.545 "bdev_lvol_decouple_parent", 00:05:14.545 "bdev_lvol_inflate", 00:05:14.545 "bdev_lvol_rename", 00:05:14.545 "bdev_lvol_clone_bdev", 00:05:14.545 "bdev_lvol_clone", 00:05:14.545 "bdev_lvol_snapshot", 00:05:14.545 "bdev_lvol_create", 00:05:14.545 "bdev_lvol_delete_lvstore", 00:05:14.545 "bdev_lvol_rename_lvstore", 00:05:14.545 
"bdev_lvol_create_lvstore", 00:05:14.545 "bdev_raid_set_options", 00:05:14.545 "bdev_raid_remove_base_bdev", 00:05:14.545 "bdev_raid_add_base_bdev", 00:05:14.545 "bdev_raid_delete", 00:05:14.545 "bdev_raid_create", 00:05:14.545 "bdev_raid_get_bdevs", 00:05:14.545 "bdev_error_inject_error", 00:05:14.545 "bdev_error_delete", 00:05:14.545 "bdev_error_create", 00:05:14.545 "bdev_split_delete", 00:05:14.545 "bdev_split_create", 00:05:14.545 "bdev_delay_delete", 00:05:14.545 "bdev_delay_create", 00:05:14.545 "bdev_delay_update_latency", 00:05:14.545 "bdev_zone_block_delete", 00:05:14.545 "bdev_zone_block_create", 00:05:14.545 "blobfs_create", 00:05:14.545 "blobfs_detect", 00:05:14.545 "blobfs_set_cache_size", 00:05:14.545 "bdev_xnvme_delete", 00:05:14.545 "bdev_xnvme_create", 00:05:14.545 "bdev_aio_delete", 00:05:14.545 "bdev_aio_rescan", 00:05:14.545 "bdev_aio_create", 00:05:14.545 "bdev_ftl_set_property", 00:05:14.545 "bdev_ftl_get_properties", 00:05:14.545 "bdev_ftl_get_stats", 00:05:14.545 "bdev_ftl_unmap", 00:05:14.545 "bdev_ftl_unload", 00:05:14.545 "bdev_ftl_delete", 00:05:14.545 "bdev_ftl_load", 00:05:14.545 "bdev_ftl_create", 00:05:14.545 "bdev_virtio_attach_controller", 00:05:14.545 "bdev_virtio_scsi_get_devices", 00:05:14.545 "bdev_virtio_detach_controller", 00:05:14.545 "bdev_virtio_blk_set_hotplug", 00:05:14.545 "bdev_iscsi_delete", 00:05:14.545 "bdev_iscsi_create", 00:05:14.545 "bdev_iscsi_set_options", 00:05:14.545 "accel_error_inject_error", 00:05:14.545 "ioat_scan_accel_module", 00:05:14.545 "dsa_scan_accel_module", 00:05:14.546 "iaa_scan_accel_module", 00:05:14.546 "keyring_file_remove_key", 00:05:14.546 "keyring_file_add_key", 00:05:14.546 "keyring_linux_set_options", 00:05:14.546 "fsdev_aio_delete", 00:05:14.546 "fsdev_aio_create", 00:05:14.546 "iscsi_get_histogram", 00:05:14.546 "iscsi_enable_histogram", 00:05:14.546 "iscsi_set_options", 00:05:14.546 "iscsi_get_auth_groups", 00:05:14.546 "iscsi_auth_group_remove_secret", 00:05:14.546 "iscsi_auth_group_add_secret", 00:05:14.546 "iscsi_delete_auth_group", 00:05:14.546 "iscsi_create_auth_group", 00:05:14.546 "iscsi_set_discovery_auth", 00:05:14.546 "iscsi_get_options", 00:05:14.546 "iscsi_target_node_request_logout", 00:05:14.546 "iscsi_target_node_set_redirect", 00:05:14.546 "iscsi_target_node_set_auth", 00:05:14.546 "iscsi_target_node_add_lun", 00:05:14.546 "iscsi_get_stats", 00:05:14.546 "iscsi_get_connections", 00:05:14.546 "iscsi_portal_group_set_auth", 00:05:14.546 "iscsi_start_portal_group", 00:05:14.546 "iscsi_delete_portal_group", 00:05:14.546 "iscsi_create_portal_group", 00:05:14.546 "iscsi_get_portal_groups", 00:05:14.546 "iscsi_delete_target_node", 00:05:14.546 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.546 "iscsi_target_node_add_pg_ig_maps", 00:05:14.546 "iscsi_create_target_node", 00:05:14.546 "iscsi_get_target_nodes", 00:05:14.546 "iscsi_delete_initiator_group", 00:05:14.546 "iscsi_initiator_group_remove_initiators", 00:05:14.546 "iscsi_initiator_group_add_initiators", 00:05:14.546 "iscsi_create_initiator_group", 00:05:14.546 "iscsi_get_initiator_groups", 00:05:14.546 "nvmf_set_crdt", 00:05:14.546 "nvmf_set_config", 00:05:14.546 "nvmf_set_max_subsystems", 00:05:14.546 "nvmf_stop_mdns_prr", 00:05:14.546 "nvmf_publish_mdns_prr", 00:05:14.546 "nvmf_subsystem_get_listeners", 00:05:14.546 "nvmf_subsystem_get_qpairs", 00:05:14.546 "nvmf_subsystem_get_controllers", 00:05:14.546 "nvmf_get_stats", 00:05:14.546 "nvmf_get_transports", 00:05:14.546 "nvmf_create_transport", 00:05:14.546 "nvmf_get_targets", 00:05:14.546 
"nvmf_delete_target", 00:05:14.546 "nvmf_create_target", 00:05:14.546 "nvmf_subsystem_allow_any_host", 00:05:14.546 "nvmf_subsystem_set_keys", 00:05:14.546 "nvmf_subsystem_remove_host", 00:05:14.546 "nvmf_subsystem_add_host", 00:05:14.546 "nvmf_ns_remove_host", 00:05:14.546 "nvmf_ns_add_host", 00:05:14.546 "nvmf_subsystem_remove_ns", 00:05:14.546 "nvmf_subsystem_set_ns_ana_group", 00:05:14.546 "nvmf_subsystem_add_ns", 00:05:14.546 "nvmf_subsystem_listener_set_ana_state", 00:05:14.546 "nvmf_discovery_get_referrals", 00:05:14.546 "nvmf_discovery_remove_referral", 00:05:14.546 "nvmf_discovery_add_referral", 00:05:14.546 "nvmf_subsystem_remove_listener", 00:05:14.546 "nvmf_subsystem_add_listener", 00:05:14.546 "nvmf_delete_subsystem", 00:05:14.546 "nvmf_create_subsystem", 00:05:14.546 "nvmf_get_subsystems", 00:05:14.546 "env_dpdk_get_mem_stats", 00:05:14.546 "nbd_get_disks", 00:05:14.546 "nbd_stop_disk", 00:05:14.546 "nbd_start_disk", 00:05:14.546 "ublk_recover_disk", 00:05:14.546 "ublk_get_disks", 00:05:14.546 "ublk_stop_disk", 00:05:14.546 "ublk_start_disk", 00:05:14.546 "ublk_destroy_target", 00:05:14.546 "ublk_create_target", 00:05:14.546 "virtio_blk_create_transport", 00:05:14.546 "virtio_blk_get_transports", 00:05:14.546 "vhost_controller_set_coalescing", 00:05:14.546 "vhost_get_controllers", 00:05:14.546 "vhost_delete_controller", 00:05:14.546 "vhost_create_blk_controller", 00:05:14.546 "vhost_scsi_controller_remove_target", 00:05:14.546 "vhost_scsi_controller_add_target", 00:05:14.546 "vhost_start_scsi_controller", 00:05:14.546 "vhost_create_scsi_controller", 00:05:14.546 "thread_set_cpumask", 00:05:14.546 "scheduler_set_options", 00:05:14.546 "framework_get_governor", 00:05:14.546 "framework_get_scheduler", 00:05:14.546 "framework_set_scheduler", 00:05:14.546 "framework_get_reactors", 00:05:14.546 "thread_get_io_channels", 00:05:14.546 "thread_get_pollers", 00:05:14.546 "thread_get_stats", 00:05:14.546 "framework_monitor_context_switch", 00:05:14.546 "spdk_kill_instance", 00:05:14.546 "log_enable_timestamps", 00:05:14.546 "log_get_flags", 00:05:14.546 "log_clear_flag", 00:05:14.546 "log_set_flag", 00:05:14.546 "log_get_level", 00:05:14.546 "log_set_level", 00:05:14.546 "log_get_print_level", 00:05:14.546 "log_set_print_level", 00:05:14.546 "framework_enable_cpumask_locks", 00:05:14.546 "framework_disable_cpumask_locks", 00:05:14.546 "framework_wait_init", 00:05:14.546 "framework_start_init", 00:05:14.546 "scsi_get_devices", 00:05:14.546 "bdev_get_histogram", 00:05:14.546 "bdev_enable_histogram", 00:05:14.546 "bdev_set_qos_limit", 00:05:14.546 "bdev_set_qd_sampling_period", 00:05:14.546 "bdev_get_bdevs", 00:05:14.546 "bdev_reset_iostat", 00:05:14.546 "bdev_get_iostat", 00:05:14.546 "bdev_examine", 00:05:14.546 "bdev_wait_for_examine", 00:05:14.546 "bdev_set_options", 00:05:14.546 "accel_get_stats", 00:05:14.546 "accel_set_options", 00:05:14.546 "accel_set_driver", 00:05:14.546 "accel_crypto_key_destroy", 00:05:14.546 "accel_crypto_keys_get", 00:05:14.546 "accel_crypto_key_create", 00:05:14.546 "accel_assign_opc", 00:05:14.546 "accel_get_module_info", 00:05:14.546 "accel_get_opc_assignments", 00:05:14.546 "vmd_rescan", 00:05:14.546 "vmd_remove_device", 00:05:14.546 "vmd_enable", 00:05:14.546 "sock_get_default_impl", 00:05:14.546 "sock_set_default_impl", 00:05:14.546 "sock_impl_set_options", 00:05:14.546 "sock_impl_get_options", 00:05:14.546 "iobuf_get_stats", 00:05:14.546 "iobuf_set_options", 00:05:14.546 "keyring_get_keys", 00:05:14.546 "framework_get_pci_devices", 00:05:14.546 
"framework_get_config", 00:05:14.546 "framework_get_subsystems", 00:05:14.546 "fsdev_set_opts", 00:05:14.546 "fsdev_get_opts", 00:05:14.546 "trace_get_info", 00:05:14.546 "trace_get_tpoint_group_mask", 00:05:14.546 "trace_disable_tpoint_group", 00:05:14.546 "trace_enable_tpoint_group", 00:05:14.546 "trace_clear_tpoint_mask", 00:05:14.546 "trace_set_tpoint_mask", 00:05:14.546 "notify_get_notifications", 00:05:14.546 "notify_get_types", 00:05:14.546 "spdk_get_version", 00:05:14.546 "rpc_get_methods" 00:05:14.546 ] 00:05:14.546 10:05:20 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:14.546 10:05:20 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.546 10:05:20 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57995 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57995 ']' 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57995 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57995 00:05:14.546 killing process with pid 57995 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57995' 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57995 00:05:14.546 10:05:20 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57995 00:05:15.922 ************************************ 00:05:15.922 END TEST spdkcli_tcp 00:05:15.922 ************************************ 00:05:15.922 00:05:15.922 real 0m2.875s 00:05:15.922 user 0m5.179s 00:05:15.922 sys 0m0.456s 00:05:15.922 10:05:21 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:15.922 10:05:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 10:05:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.922 10:05:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:15.922 10:05:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:15.922 10:05:21 -- common/autotest_common.sh@10 -- # set +x 00:05:15.922 ************************************ 00:05:15.922 START TEST dpdk_mem_utility 00:05:15.922 ************************************ 00:05:15.922 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.181 * Looking for test storage... 
00:05:16.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:16.181 10:05:21 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc 
genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:16.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:16.181 --rc genhtml_branch_coverage=1 00:05:16.181 --rc genhtml_function_coverage=1 00:05:16.181 --rc genhtml_legend=1 00:05:16.181 --rc geninfo_all_blocks=1 00:05:16.181 --rc geninfo_unexecuted_blocks=1 00:05:16.181 00:05:16.181 ' 00:05:16.181 10:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:16.181 10:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58100 00:05:16.181 10:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58100 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58100 ']' 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.181 10:05:21 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:16.181 10:05:21 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.181 [2024-11-04 10:05:21.849142] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
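Each test in this section blocks on waitforlisten (max_retries=100) until the freshly launched spdk_tgt opens its RPC socket; the startup being waited on is the DPDK EAL initialization traced below. A much-simplified sketch that only polls for the socket file, whereas the real helper also probes the RPC interface:

#!/usr/bin/env bash
wait_for_socket() {
    local sock=$1 max_retries=${2:-100} i   # 100 retries, as in the trace
    for (( i = 0; i < max_retries; i++ )); do
        [[ -S $sock ]] && return 0          # socket file has appeared
        sleep 0.1
    done
    return 1
}
wait_for_socket /var/tmp/spdk.sock && echo "target is listening"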
00:05:16.181 [2024-11-04 10:05:21.849410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58100 ] 00:05:16.439 [2024-11-04 10:05:22.008470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.439 [2024-11-04 10:05:22.109136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.004 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:17.004 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:17.004 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.004 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.004 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:17.004 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.004 { 00:05:17.004 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.004 } 00:05:17.004 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:17.004 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:17.264 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:17.264 1 heaps totaling size 816.000000 MiB 00:05:17.264 size: 816.000000 MiB heap id: 0 00:05:17.264 end heaps---------- 00:05:17.264 9 mempools totaling size 595.772034 MiB 00:05:17.264 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.264 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.264 size: 92.545471 MiB name: bdev_io_58100 00:05:17.264 size: 50.003479 MiB name: msgpool_58100 00:05:17.264 size: 36.509338 MiB name: fsdev_io_58100 00:05:17.264 size: 21.763794 MiB name: PDU_Pool 00:05:17.264 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.264 size: 4.133484 MiB name: evtpool_58100 00:05:17.264 size: 0.026123 MiB name: Session_Pool 00:05:17.264 end mempools------- 00:05:17.264 6 memzones totaling size 4.142822 MiB 00:05:17.264 size: 1.000366 MiB name: RG_ring_0_58100 00:05:17.264 size: 1.000366 MiB name: RG_ring_1_58100 00:05:17.264 size: 1.000366 MiB name: RG_ring_4_58100 00:05:17.264 size: 1.000366 MiB name: RG_ring_5_58100 00:05:17.264 size: 0.125366 MiB name: RG_ring_2_58100 00:05:17.264 size: 0.015991 MiB name: RG_ring_3_58100 00:05:17.264 end memzones------- 00:05:17.264 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.264 heap id: 0 total size: 816.000000 MiB number of busy elements: 313 number of free elements: 18 00:05:17.264 list of free elements. 
size: 16.791870 MiB 00:05:17.264 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:17.264 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:17.264 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:17.264 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:17.264 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:17.264 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:17.264 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:17.264 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:17.264 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:17.264 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:17.264 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:17.264 element at address: 0x20001ac00000 with size: 0.561218 MiB 00:05:17.264 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:17.264 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:17.264 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:17.264 element at address: 0x200012c00000 with size: 0.443237 MiB 00:05:17.264 element at address: 0x200028000000 with size: 0.391663 MiB 00:05:17.264 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:17.264 list of standard malloc elements. size: 199.287231 MiB 00:05:17.264 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:17.264 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:17.264 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:17.264 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:17.264 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:17.264 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:17.264 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:17.264 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:17.264 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:17.264 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:17.264 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:17.264 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:17.264 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:17.265 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:17.265 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71780 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:17.265 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 
00:05:17.266 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:17.266 element at 
address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:17.266 element at address: 0x200028064440 with size: 0.000244 MiB 00:05:17.266 element at address: 0x200028064540 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b200 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d180 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806da80 
with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:17.266 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:17.266 list of memzone associated elements. 
size: 599.920898 MiB 00:05:17.266 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:17.266 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.266 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:17.266 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.266 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:17.266 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58100_0 00:05:17.266 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:17.266 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58100_0 00:05:17.266 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:17.266 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58100_0 00:05:17.267 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:17.267 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.267 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:17.267 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.267 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:17.267 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58100_0 00:05:17.267 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:17.267 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58100 00:05:17.267 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:17.267 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58100 00:05:17.267 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:17.267 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.267 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:17.267 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.267 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:17.267 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.267 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:17.267 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.267 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:17.267 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58100 00:05:17.267 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:17.267 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58100 00:05:17.267 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:17.267 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58100 00:05:17.267 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:17.267 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58100 00:05:17.267 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:17.267 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58100 00:05:17.267 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:17.267 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58100 00:05:17.267 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:17.267 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.267 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:17.267 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.267 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:17.267 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.267 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:17.267 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58100 00:05:17.267 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:17.267 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58100 00:05:17.267 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:17.267 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.267 element at address: 0x200028064640 with size: 0.023804 MiB 00:05:17.267 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.267 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:17.267 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58100 00:05:17.267 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:05:17.267 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.267 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:17.267 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58100 00:05:17.267 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:17.267 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58100 00:05:17.267 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:17.267 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58100 00:05:17.267 element at address: 0x20002806b300 with size: 0.000366 MiB 00:05:17.267 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.267 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.267 10:05:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58100 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58100 ']' 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58100 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58100 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58100' 00:05:17.267 killing process with pid 58100 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58100 00:05:17.267 10:05:22 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58100 00:05:18.674 00:05:18.674 real 0m2.734s 00:05:18.674 user 0m2.733s 00:05:18.674 sys 0m0.418s 00:05:18.674 10:05:24 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.674 ************************************ 00:05:18.674 END TEST dpdk_mem_utility 00:05:18.674 ************************************ 00:05:18.674 10:05:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.674 10:05:24 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.674 10:05:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.674 10:05:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.674 10:05:24 -- common/autotest_common.sh@10 -- # set +x 
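The dump above is the second half of test_dpdk_mem_info.sh's output: each "element at address: ... with size: ... MiB" record is one DPDK heap allocation, and the "associated memzone info" records tie the larger ones back to named pools for pid 58100 (msgpool, bdev_io, fsdev_io, the PDU and SCSI_TASK pools). A minimal sketch for totalling such a dump offline, assuming it has been saved to build.log (the filename is illustrative):

    # Sum every "with size: <n> MiB" pair; the "info: size:" records are
    # skipped, so only heap elements are counted, not the memzone summaries.
    awk '{ for (i = 1; i + 2 <= NF; i++)
             if ($i == "with" && $(i+1) == "size:") { total += $(i+2); n++ } }
         END { printf "%d elements, %.3f MiB\n", n, total }' build.log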
00:05:18.674 ************************************ 00:05:18.674 START TEST event 00:05:18.934 ************************************ 00:05:18.934 10:05:24 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:18.934 * Looking for test storage... 00:05:18.935 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.935 10:05:24 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.935 10:05:24 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.935 10:05:24 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.935 10:05:24 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.935 10:05:24 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.935 10:05:24 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.935 10:05:24 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.935 10:05:24 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.935 10:05:24 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.935 10:05:24 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.935 10:05:24 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.935 10:05:24 event -- scripts/common.sh@344 -- # case "$op" in 00:05:18.935 10:05:24 event -- scripts/common.sh@345 -- # : 1 00:05:18.935 10:05:24 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.935 10:05:24 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.935 10:05:24 event -- scripts/common.sh@365 -- # decimal 1 00:05:18.935 10:05:24 event -- scripts/common.sh@353 -- # local d=1 00:05:18.935 10:05:24 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.935 10:05:24 event -- scripts/common.sh@355 -- # echo 1 00:05:18.935 10:05:24 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.935 10:05:24 event -- scripts/common.sh@366 -- # decimal 2 00:05:18.935 10:05:24 event -- scripts/common.sh@353 -- # local d=2 00:05:18.935 10:05:24 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.935 10:05:24 event -- scripts/common.sh@355 -- # echo 2 00:05:18.935 10:05:24 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.935 10:05:24 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.935 10:05:24 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.935 10:05:24 event -- scripts/common.sh@368 -- # return 0 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.935 --rc genhtml_branch_coverage=1 00:05:18.935 --rc genhtml_function_coverage=1 00:05:18.935 --rc genhtml_legend=1 00:05:18.935 --rc geninfo_all_blocks=1 00:05:18.935 --rc geninfo_unexecuted_blocks=1 00:05:18.935 00:05:18.935 ' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.935 --rc genhtml_branch_coverage=1 00:05:18.935 --rc genhtml_function_coverage=1 00:05:18.935 --rc genhtml_legend=1 00:05:18.935 --rc 
geninfo_all_blocks=1 00:05:18.935 --rc geninfo_unexecuted_blocks=1 00:05:18.935 00:05:18.935 ' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.935 --rc genhtml_branch_coverage=1 00:05:18.935 --rc genhtml_function_coverage=1 00:05:18.935 --rc genhtml_legend=1 00:05:18.935 --rc geninfo_all_blocks=1 00:05:18.935 --rc geninfo_unexecuted_blocks=1 00:05:18.935 00:05:18.935 ' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.935 --rc genhtml_branch_coverage=1 00:05:18.935 --rc genhtml_function_coverage=1 00:05:18.935 --rc genhtml_legend=1 00:05:18.935 --rc geninfo_all_blocks=1 00:05:18.935 --rc geninfo_unexecuted_blocks=1 00:05:18.935 00:05:18.935 ' 00:05:18.935 10:05:24 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:18.935 10:05:24 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.935 10:05:24 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:18.935 10:05:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.935 10:05:24 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.935 ************************************ 00:05:18.935 START TEST event_perf 00:05:18.935 ************************************ 00:05:18.935 10:05:24 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.935 Running I/O for 1 seconds...[2024-11-04 10:05:24.582950] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:18.935 [2024-11-04 10:05:24.583389] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58192 ] 00:05:19.195 [2024-11-04 10:05:24.744143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:19.195 [2024-11-04 10:05:24.849520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:19.195 [2024-11-04 10:05:24.849651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:19.195 [2024-11-04 10:05:24.849687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.195 Running I/O for 1 seconds...[2024-11-04 10:05:24.849695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.571 00:05:20.571 lcore 0: 198619 00:05:20.571 lcore 1: 198618 00:05:20.571 lcore 2: 198618 00:05:20.571 lcore 3: 198616 00:05:20.571 done. 
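The event_perf run above was started with -m 0xF -t 1: the mask allocates one reactor per set bit (hence "Total cores available: 4" and the four "Reactor started" notices), and the per-lcore counters printed at the end (~198k each) are the events each reactor processed during the 1-second run. A sketch of the same invocation with a narrower mask, using the binary path from the trace (the 0x3 variant is illustrative):

    # -m is an lcore bitmask (0xF = cores 0-3), -t is the run time in seconds
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # 0x3 would start reactors on cores 0 and 1 only
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0x3 -t 1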
00:05:20.571 00:05:20.571 real 0m1.468s 00:05:20.571 user 0m4.258s 00:05:20.571 sys 0m0.082s 00:05:20.571 10:05:26 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:20.571 10:05:26 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.571 ************************************ 00:05:20.571 END TEST event_perf 00:05:20.571 ************************************ 00:05:20.571 10:05:26 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.571 10:05:26 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:20.571 10:05:26 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:20.571 10:05:26 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.571 ************************************ 00:05:20.571 START TEST event_reactor 00:05:20.571 ************************************ 00:05:20.571 10:05:26 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:20.571 [2024-11-04 10:05:26.090429] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:20.571 [2024-11-04 10:05:26.090544] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58237 ] 00:05:20.571 [2024-11-04 10:05:26.250718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.829 [2024-11-04 10:05:26.351794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.762 test_start 00:05:21.762 oneshot 00:05:21.762 tick 100 00:05:21.762 tick 100 00:05:21.762 tick 250 00:05:21.762 tick 100 00:05:21.762 tick 100 00:05:21.762 tick 100 00:05:21.762 tick 250 00:05:21.762 tick 500 00:05:21.763 tick 100 00:05:21.763 tick 100 00:05:21.763 tick 250 00:05:21.763 tick 100 00:05:21.763 tick 100 00:05:21.763 test_end 00:05:21.763 00:05:21.763 real 0m1.441s 00:05:21.763 user 0m1.266s 00:05:21.763 sys 0m0.068s 00:05:21.763 ************************************ 00:05:21.763 END TEST event_reactor 00:05:21.763 ************************************ 00:05:21.763 10:05:27 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.763 10:05:27 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:22.021 10:05:27 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.021 10:05:27 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:22.021 10:05:27 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:22.021 10:05:27 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.021 ************************************ 00:05:22.021 START TEST event_reactor_perf 00:05:22.021 ************************************ 00:05:22.021 10:05:27 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:22.021 [2024-11-04 10:05:27.574623] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:05:22.021 [2024-11-04 10:05:27.574745] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58268 ] 00:05:22.021 [2024-11-04 10:05:27.734884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.279 [2024-11-04 10:05:27.836940] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.655 test_start 00:05:23.655 test_end 00:05:23.655 Performance: 315802 events per second 00:05:23.655 00:05:23.655 real 0m1.452s 00:05:23.655 user 0m1.275s 00:05:23.655 sys 0m0.069s 00:05:23.655 10:05:28 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:23.655 ************************************ 00:05:23.655 END TEST event_reactor_perf 00:05:23.655 ************************************ 00:05:23.655 10:05:28 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:23.655 10:05:29 event -- event/event.sh@49 -- # uname -s 00:05:23.655 10:05:29 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:23.655 10:05:29 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.655 10:05:29 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:23.655 10:05:29 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:23.655 10:05:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:23.655 ************************************ 00:05:23.655 START TEST event_scheduler 00:05:23.655 ************************************ 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:23.655 * Looking for test storage... 
00:05:23.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:23.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:23.655 10:05:29 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:23.655 10:05:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:23.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.655 --rc genhtml_branch_coverage=1 00:05:23.655 --rc genhtml_function_coverage=1 00:05:23.656 --rc genhtml_legend=1 00:05:23.656 --rc geninfo_all_blocks=1 00:05:23.656 --rc geninfo_unexecuted_blocks=1 00:05:23.656 00:05:23.656 ' 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.656 --rc genhtml_branch_coverage=1 00:05:23.656 --rc genhtml_function_coverage=1 00:05:23.656 --rc genhtml_legend=1 00:05:23.656 --rc geninfo_all_blocks=1 00:05:23.656 --rc geninfo_unexecuted_blocks=1 00:05:23.656 00:05:23.656 ' 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.656 --rc genhtml_branch_coverage=1 00:05:23.656 --rc genhtml_function_coverage=1 00:05:23.656 --rc genhtml_legend=1 00:05:23.656 --rc geninfo_all_blocks=1 00:05:23.656 --rc geninfo_unexecuted_blocks=1 00:05:23.656 00:05:23.656 ' 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:23.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:23.656 --rc genhtml_branch_coverage=1 00:05:23.656 --rc genhtml_function_coverage=1 00:05:23.656 --rc genhtml_legend=1 00:05:23.656 --rc geninfo_all_blocks=1 00:05:23.656 --rc geninfo_unexecuted_blocks=1 00:05:23.656 00:05:23.656 ' 00:05:23.656 10:05:29 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:23.656 10:05:29 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58344 00:05:23.656 10:05:29 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.656 10:05:29 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58344 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58344 ']' 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.656 10:05:29 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:23.656 10:05:29 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.656 [2024-11-04 10:05:29.273234] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:05:23.656 [2024-11-04 10:05:29.273866] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58344 ] 00:05:23.914 [2024-11-04 10:05:29.431253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:23.914 [2024-11-04 10:05:29.521804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.914 [2024-11-04 10:05:29.521813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.914 [2024-11-04 10:05:29.522139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.914 [2024-11-04 10:05:29.521871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.479 10:05:30 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:24.479 10:05:30 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:24.479 10:05:30 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:24.479 10:05:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.479 10:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.479 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.479 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.479 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.479 POWER: Cannot set governor of lcore 0 to performance 00:05:24.479 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.479 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.480 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:24.480 POWER: Cannot set governor of lcore 0 to userspace 00:05:24.480 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:24.480 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:24.480 POWER: Unable to set Power Management Environment for lcore 0 00:05:24.480 [2024-11-04 10:05:30.128611] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:24.480 [2024-11-04 10:05:30.128642] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:24.480 [2024-11-04 10:05:30.128722] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:24.480 [2024-11-04 10:05:30.128742] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:24.480 [2024-11-04 10:05:30.128749] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:24.480 [2024-11-04 10:05:30.128756] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:24.480 10:05:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.480 10:05:30 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:24.480 10:05:30 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.480 10:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 [2024-11-04 10:05:30.314033] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:24.738 10:05:30 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.738 10:05:30 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:24.738 10:05:30 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:24.738 10:05:30 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:24.738 10:05:30 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 ************************************ 00:05:24.738 START TEST scheduler_create_thread 00:05:24.738 ************************************ 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 2 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 3 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 4 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.738 5 00:05:24.738 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 6 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 7 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 8 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 9 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 10 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.739 10:05:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.152 ************************************ 00:05:26.152 END TEST scheduler_create_thread 00:05:26.152 ************************************ 00:05:26.152 10:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:26.152 00:05:26.152 real 0m1.172s 00:05:26.152 user 0m0.012s 00:05:26.152 sys 0m0.007s 00:05:26.152 10:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.152 10:05:31 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.152 10:05:31 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.152 10:05:31 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58344 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58344 ']' 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58344 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58344 00:05:26.152 killing process with pid 58344 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58344' 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58344 00:05:26.152 10:05:31 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58344 00:05:26.411 [2024-11-04 10:05:31.978833] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
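scheduler_create_thread above exercises the whole thread lifecycle through rpc_cmd --plugin scheduler_plugin: it creates a pinned active thread (-a 100) and a pinned idle thread (-a 0) for each of the four cores, two unpinned threads, then sets thread 11 to 50% active and deletes thread 12 before the app is torn down. Outside the harness the same calls can be issued directly; a sketch, assuming rpc_cmd is a thin wrapper over scripts/rpc.py, that the scheduler_plugin module is importable, and that the create call prints the new thread id as it does in the trace:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py --plugin scheduler_plugin"
    # create an active thread pinned to core 0; capture the returned thread id
    tid=$($RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100)
    $RPC scheduler_thread_set_active "$tid" 50   # drop it to 50% busy
    $RPC scheduler_thread_delete "$tid"          # remove it again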
00:05:26.977 ************************************ 00:05:26.977 END TEST event_scheduler 00:05:26.977 ************************************ 00:05:26.977 00:05:26.977 real 0m3.528s 00:05:26.977 user 0m5.831s 00:05:26.977 sys 0m0.360s 00:05:26.977 10:05:32 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:26.977 10:05:32 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.977 10:05:32 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.977 10:05:32 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.977 10:05:32 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:26.977 10:05:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:26.977 10:05:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.977 ************************************ 00:05:26.977 START TEST app_repeat 00:05:26.977 ************************************ 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.977 Process app_repeat pid: 58428 00:05:26.977 spdk_app_start Round 0 00:05:26.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58428 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58428' 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58428 ']' 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.977 10:05:32 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:26.977 10:05:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.977 [2024-11-04 10:05:32.671444] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:05:26.977 [2024-11-04 10:05:32.671544] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58428 ] 00:05:27.235 [2024-11-04 10:05:32.826747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:27.235 [2024-11-04 10:05:32.932719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.235 [2024-11-04 10:05:32.932842] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.804 10:05:33 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:27.804 10:05:33 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:27.804 10:05:33 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.073 Malloc0 00:05:28.073 10:05:33 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:28.331 Malloc1 00:05:28.331 10:05:34 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.331 10:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:28.590 /dev/nbd0 00:05:28.590 10:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.590 10:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:28.590 10:05:34 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.590 1+0 records in 00:05:28.590 1+0 records out 00:05:28.590 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250758 s, 16.3 MB/s 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:28.590 10:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:28.590 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.590 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.590 10:05:34 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.850 /dev/nbd1 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.850 1+0 records in 00:05:28.850 1+0 records out 00:05:28.850 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325929 s, 12.6 MB/s 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:28.850 10:05:34 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
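Both waitfornbd calls above run the same two-stage probe: wait for the kernel to list the device in /proc/partitions, then read a single 4 KiB block with O_DIRECT and check that something arrived. A condensed sketch, with /tmp/nbdtest standing in for the test/event/nbdtest path in the trace:

waitfornbd() {
    local nbd_name=$1 i size
    local tmp=/tmp/nbdtest                 # stand-in path; the trace uses the test directory
    for ((i = 1; i <= 20; i++)); do        # stage 1: device node registered?
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1
    done
    for ((i = 1; i <= 20; i++)); do        # stage 2: does it serve direct reads?
        dd if="/dev/$nbd_name" of="$tmp" bs=4096 count=1 iflag=direct && break
        sleep 0.1
    done
    size=$(stat -c %s "$tmp")
    rm -f "$tmp"
    [ "$size" != 0 ]                       # mirrors the '[' 4096 '!=' 0 ']' check in the log
}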
00:05:28.850 10:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:29.110 { 00:05:29.110 "nbd_device": "/dev/nbd0", 00:05:29.110 "bdev_name": "Malloc0" 00:05:29.110 }, 00:05:29.110 { 00:05:29.110 "nbd_device": "/dev/nbd1", 00:05:29.110 "bdev_name": "Malloc1" 00:05:29.110 } 00:05:29.110 ]' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:29.110 { 00:05:29.110 "nbd_device": "/dev/nbd0", 00:05:29.110 "bdev_name": "Malloc0" 00:05:29.110 }, 00:05:29.110 { 00:05:29.110 "nbd_device": "/dev/nbd1", 00:05:29.110 "bdev_name": "Malloc1" 00:05:29.110 } 00:05:29.110 ]' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:29.110 /dev/nbd1' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:29.110 /dev/nbd1' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:29.110 256+0 records in 00:05:29.110 256+0 records out 00:05:29.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00692555 s, 151 MB/s 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:29.110 256+0 records in 00:05:29.110 256+0 records out 00:05:29.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0164522 s, 63.7 MB/s 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:29.110 256+0 records in 00:05:29.110 256+0 records out 00:05:29.110 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0181424 s, 57.8 MB/s 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:29.110 10:05:34 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.110 10:05:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:29.370 10:05:35 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:29.638 10:05:35 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:29.638 10:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.932 10:05:35 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.932 10:05:35 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:30.193 10:05:35 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:31.132 [2024-11-04 10:05:36.596525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:31.132 [2024-11-04 10:05:36.695139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:31.132 [2024-11-04 10:05:36.695166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.132 [2024-11-04 10:05:36.820893] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:31.132 [2024-11-04 10:05:36.820960] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:33.669 10:05:38 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:33.669 spdk_app_start Round 1 00:05:33.669 10:05:38 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:33.669 10:05:38 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58428 ']' 00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:33.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
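The zero-count check that closes each round (the nbd_get_disks JSON piped through jq, then grep -c with a true fallback) reduces to roughly:

nbd_get_count() {
    local rpc_server=$1
    local nbd_disks_json nbd_disks_name count
    nbd_disks_json=$(scripts/rpc.py -s "$rpc_server" nbd_get_disks)
    nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
    # grep -c exits 1 on zero matches, which would trip an errexit shell; hence '|| true'
    count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
    echo "$count"
}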
00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:33.669 10:05:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:33.670 10:05:39 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:33.670 10:05:39 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:33.670 10:05:39 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.670 Malloc0 00:05:33.670 10:05:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.929 Malloc1 00:05:33.929 10:05:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.929 10:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:34.188 /dev/nbd0 00:05:34.189 10:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:34.189 10:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.189 1+0 records in 00:05:34.189 1+0 records out 
00:05:34.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000251452 s, 16.3 MB/s 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:34.189 10:05:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:34.189 10:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.189 10:05:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.189 10:05:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:34.447 /dev/nbd1 00:05:34.447 10:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:34.447 10:05:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:34.447 10:05:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:34.447 1+0 records in 00:05:34.447 1+0 records out 00:05:34.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258558 s, 15.8 MB/s 00:05:34.447 10:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.447 10:05:40 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:34.447 10:05:40 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:34.447 10:05:40 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:34.447 10:05:40 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:34.447 10:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:34.447 10:05:40 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:34.447 10:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.447 10:05:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.447 10:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:34.709 { 00:05:34.709 "nbd_device": "/dev/nbd0", 00:05:34.709 "bdev_name": "Malloc0" 00:05:34.709 }, 00:05:34.709 { 00:05:34.709 "nbd_device": "/dev/nbd1", 00:05:34.709 "bdev_name": "Malloc1" 00:05:34.709 } 
00:05:34.709 ]' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:34.709 { 00:05:34.709 "nbd_device": "/dev/nbd0", 00:05:34.709 "bdev_name": "Malloc0" 00:05:34.709 }, 00:05:34.709 { 00:05:34.709 "nbd_device": "/dev/nbd1", 00:05:34.709 "bdev_name": "Malloc1" 00:05:34.709 } 00:05:34.709 ]' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:34.709 /dev/nbd1' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:34.709 /dev/nbd1' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:34.709 256+0 records in 00:05:34.709 256+0 records out 00:05:34.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.006514 s, 161 MB/s 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:34.709 256+0 records in 00:05:34.709 256+0 records out 00:05:34.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179823 s, 58.3 MB/s 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:34.709 256+0 records in 00:05:34.709 256+0 records out 00:05:34.709 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0170537 s, 61.5 MB/s 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.709 10:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.990 10:05:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:35.252 10:05:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:35.252 10:05:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:35.823 10:05:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:36.450 [2024-11-04 10:05:41.835868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:36.450 [2024-11-04 10:05:41.920423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.450 [2024-11-04 10:05:41.920447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.450 [2024-11-04 10:05:42.020949] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.450 [2024-11-04 10:05:42.021006] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:38.997 10:05:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.997 spdk_app_start Round 2 00:05:38.997 10:05:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:38.997 10:05:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58428 ']' 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
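The write and verify passes traced in each round share one temp file: 1 MiB of urandom goes through each nbd device with O_DIRECT, then cmp re-reads every device against that same file. Approximately, with /tmp/nbdrandtest as a stand-in for the test/event/nbdrandtest path:

nbd_dd_data_verify() {
    local nbd_list=($1) operation=$2
    local tmp_file=/tmp/nbdrandtest
    if [ "$operation" = write ]; then
        dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
        for i in "${nbd_list[@]}"; do
            dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct
        done
    elif [ "$operation" = verify ]; then
        for i in "${nbd_list[@]}"; do
            cmp -b -n 1M "$tmp_file" "$i"  # byte-compare the first 1 MiB read back from the device
        done
        rm "$tmp_file"
    fi
}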
00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:38.997 10:05:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:38.997 10:05:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.997 Malloc0 00:05:39.258 10:05:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.258 Malloc1 00:05:39.258 10:05:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.258 10:05:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:39.564 /dev/nbd0 00:05:39.564 10:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:39.564 10:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:39.564 10:05:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:39.564 10:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.564 10:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.564 10:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.565 1+0 records in 00:05:39.565 1+0 records out 
00:05:39.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000207083 s, 19.8 MB/s 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.565 10:05:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.565 10:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.565 10:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.565 10:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:39.825 /dev/nbd1 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:39.825 1+0 records in 00:05:39.825 1+0 records out 00:05:39.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000165955 s, 24.7 MB/s 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:39.825 10:05:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.825 10:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.087 { 00:05:40.087 "nbd_device": "/dev/nbd0", 00:05:40.087 "bdev_name": "Malloc0" 00:05:40.087 }, 00:05:40.087 { 00:05:40.087 "nbd_device": "/dev/nbd1", 00:05:40.087 "bdev_name": "Malloc1" 00:05:40.087 } 
00:05:40.087 ]' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.087 { 00:05:40.087 "nbd_device": "/dev/nbd0", 00:05:40.087 "bdev_name": "Malloc0" 00:05:40.087 }, 00:05:40.087 { 00:05:40.087 "nbd_device": "/dev/nbd1", 00:05:40.087 "bdev_name": "Malloc1" 00:05:40.087 } 00:05:40.087 ]' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.087 /dev/nbd1' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.087 /dev/nbd1' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.087 256+0 records in 00:05:40.087 256+0 records out 00:05:40.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00683106 s, 154 MB/s 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.087 10:05:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.087 256+0 records in 00:05:40.087 256+0 records out 00:05:40.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151571 s, 69.2 MB/s 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.349 256+0 records in 00:05:40.349 256+0 records out 00:05:40.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166458 s, 63.0 MB/s 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.349 10:05:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.349 10:05:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.349 10:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.611 10:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.872 10:05:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.872 10:05:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.132 10:05:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.703 [2024-11-04 10:05:47.418337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.964 [2024-11-04 10:05:47.500943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.964 [2024-11-04 10:05:47.501085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.964 [2024-11-04 10:05:47.604813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.964 [2024-11-04 10:05:47.604863] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:44.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:44.507 10:05:49 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58428 /var/tmp/spdk-nbd.sock 00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58428 ']' 00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
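Each nbd_stop_disk above is paired with waitfornbd_exit, the inverse of the startup probe: poll /proc/partitions until the device name disappears (the break at nbd_common.sh@41 fires once grep stops matching). Roughly:

waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if ! grep -q -w "$nbd_name" /proc/partitions; then
            break                          # the kernel has torn the device down
        fi
        sleep 0.1
    done
    return 0
}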
00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:44.507 10:05:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:44.507 10:05:50 event.app_repeat -- event/event.sh@39 -- # killprocess 58428 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58428 ']' 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58428 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58428 00:05:44.507 killing process with pid 58428 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58428' 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58428 00:05:44.507 10:05:50 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58428 00:05:45.078 spdk_app_start is called in Round 0. 00:05:45.078 Shutdown signal received, stop current app iteration 00:05:45.078 Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 reinitialization... 00:05:45.078 spdk_app_start is called in Round 1. 00:05:45.078 Shutdown signal received, stop current app iteration 00:05:45.078 Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 reinitialization... 00:05:45.078 spdk_app_start is called in Round 2. 00:05:45.078 Shutdown signal received, stop current app iteration 00:05:45.078 Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 reinitialization... 00:05:45.078 spdk_app_start is called in Round 3. 00:05:45.078 Shutdown signal received, stop current app iteration 00:05:45.078 10:05:50 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:45.078 10:05:50 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:45.078 00:05:45.078 real 0m17.986s 00:05:45.078 user 0m39.510s 00:05:45.078 sys 0m2.074s 00:05:45.078 10:05:50 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:45.078 10:05:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.078 ************************************ 00:05:45.078 END TEST app_repeat 00:05:45.078 ************************************ 00:05:45.078 10:05:50 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:45.078 10:05:50 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.078 10:05:50 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.078 10:05:50 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.078 10:05:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.078 ************************************ 00:05:45.078 START TEST cpu_locks 00:05:45.078 ************************************ 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:45.078 * Looking for test storage... 
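The START/END banners and the real/user/sys triple wrapping each test come from the harness's run_test wrapper, in essence bash's time builtin plus banners (argument validation and xtrace toggling elided; exact banner placement here is approximate):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                              # emits the real/user/sys lines seen in the log
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}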
00:05:45.078 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.078 10:05:50 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:45.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.078 --rc genhtml_branch_coverage=1 00:05:45.078 --rc genhtml_function_coverage=1 00:05:45.078 --rc genhtml_legend=1 00:05:45.078 --rc geninfo_all_blocks=1 00:05:45.078 --rc geninfo_unexecuted_blocks=1 00:05:45.078 00:05:45.078 ' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:45.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.078 --rc genhtml_branch_coverage=1 00:05:45.078 --rc genhtml_function_coverage=1 
00:05:45.078 --rc genhtml_legend=1 00:05:45.078 --rc geninfo_all_blocks=1 00:05:45.078 --rc geninfo_unexecuted_blocks=1 00:05:45.078 00:05:45.078 ' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:45.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.078 --rc genhtml_branch_coverage=1 00:05:45.078 --rc genhtml_function_coverage=1 00:05:45.078 --rc genhtml_legend=1 00:05:45.078 --rc geninfo_all_blocks=1 00:05:45.078 --rc geninfo_unexecuted_blocks=1 00:05:45.078 00:05:45.078 ' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:45.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.078 --rc genhtml_branch_coverage=1 00:05:45.078 --rc genhtml_function_coverage=1 00:05:45.078 --rc genhtml_legend=1 00:05:45.078 --rc geninfo_all_blocks=1 00:05:45.078 --rc geninfo_unexecuted_blocks=1 00:05:45.078 00:05:45.078 ' 00:05:45.078 10:05:50 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:45.078 10:05:50 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:45.078 10:05:50 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:45.078 10:05:50 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:45.078 10:05:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.078 ************************************ 00:05:45.078 START TEST default_locks 00:05:45.078 ************************************ 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58864 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58864 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58864 ']' 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.078 10:05:50 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:45.339 [2024-11-04 10:05:50.853492] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
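
The trace above is scripts/common.sh deciding whether the installed lcov (1.15) is older than 2 before choosing coverage flags: both version strings are split on '.', '-' and ':' and compared component by component, with missing components treated as 0. A minimal standalone sketch of that element-wise comparison, assuming purely numeric components (it mirrors the traced logic, it is not the scripts/common.sh source):

    # ver_lt A B: succeed iff version A < version B (numeric components only).
    ver_lt() {
        local -a v1 v2
        IFS=.-: read -ra v1 <<< "$1"
        IFS=.-: read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for (( i = 0; i < max; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}    # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1    # equal is not "less than"
    }
    ver_lt 1.15 2 && echo "old lcov: keep the --rc lcov_* option spellings"
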
00:05:45.339 [2024-11-04 10:05:50.853594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58864 ] 00:05:45.339 [2024-11-04 10:05:51.003710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.600 [2024-11-04 10:05:51.087136] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58864 ']' 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:46.172 killing process with pid 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58864' 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58864 00:05:46.172 10:05:51 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58864 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58864 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58864 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:47.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
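
killprocess, traced above for pid 58864, is deliberately careful: kill -0 confirms the pid is still alive, and the comm name is inspected (an spdk_tgt shows up as reactor_0) so a bare sudo is never the target. A hedged sketch of that shape, simplified from the trace rather than copied from autotest_common.sh:

    # Sketch only: the real helper also handles sudo-wrapped processes;
    # this keeps just the checks visible in the trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid"                              # must still be alive
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_0 for an spdk_tgt
        [[ $name != sudo ]]                         # never kill a bare sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap our own child
    }
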
00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58864 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58864 ']' 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.558 ERROR: process (pid: 58864) is no longer running 00:05:47.558 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58864) - No such process 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:47.558 00:05:47.558 real 0m2.357s 00:05:47.558 user 0m2.357s 00:05:47.558 sys 0m0.431s 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.558 ************************************ 00:05:47.558 END TEST default_locks 00:05:47.558 ************************************ 00:05:47.558 10:05:53 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.558 10:05:53 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:47.558 10:05:53 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.558 10:05:53 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.558 10:05:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:47.558 ************************************ 00:05:47.558 START TEST default_locks_via_rpc 00:05:47.558 ************************************ 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58917 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58917 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58917 ']' 
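
default_locks then deliberately runs waitforlisten against the pid it just killed, wrapped in the NOT helper seen in the trace: the step passes because waitforlisten fails with es=1, while a status above 128 (a signal, i.e. a crash) would still fail the test. A minimal sketch of that inversion:

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && return "$es"    # killed by a signal: a real failure
        (( es != 0 ))                     # succeed only if the command failed
    }
    # waitforlisten comes from the harness; the dead target must not answer.
    NOT waitforlisten 58864 && echo "pid 58864 is gone, as expected"
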
00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:47.558 10:05:53 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:47.558 [2024-11-04 10:05:53.258401] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:47.558 [2024-11-04 10:05:53.258532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58917 ] 00:05:47.816 [2024-11-04 10:05:53.414895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.816 [2024-11-04 10:05:53.499452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58917 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58917 00:05:48.381 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58917 00:05:48.639 10:05:54 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58917 ']' 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58917 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58917 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.639 killing process with pid 58917 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58917' 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58917 00:05:48.639 10:05:54 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58917 00:05:50.046 00:05:50.046 real 0m2.374s 00:05:50.046 user 0m2.386s 00:05:50.046 sys 0m0.448s 00:05:50.046 10:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:50.046 10:05:55 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.046 ************************************ 00:05:50.046 END TEST default_locks_via_rpc 00:05:50.046 ************************************ 00:05:50.046 10:05:55 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:50.046 10:05:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:50.046 10:05:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:50.046 10:05:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:50.046 ************************************ 00:05:50.046 START TEST non_locking_app_on_locked_coremask 00:05:50.046 ************************************ 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58975 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58975 /var/tmp/spdk.sock 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58975 ']' 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
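
default_locks_via_rpc, which just finished, exercised the same claim from the RPC side: the target boots holding the core-0 lock, framework_disable_cpumask_locks releases it, framework_enable_cpumask_locks takes it back, and lslocks confirms the file lock is held again. Assuming SPDK's usual scripts/rpc.py client, the manual equivalent would be:

    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_disable_cpumask_locks
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    lslocks -p "$(pidof spdk_tgt)" | grep spdk_cpu_lock    # claim is back
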
00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.046 10:05:55 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:50.046 [2024-11-04 10:05:55.682320] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:50.046 [2024-11-04 10:05:55.682451] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58975 ] 00:05:50.305 [2024-11-04 10:05:55.836363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.305 [2024-11-04 10:05:55.943702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.873 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.873 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:50.873 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58991 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58991 /var/tmp/spdk2.sock 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58991 ']' 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.874 10:05:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:51.135 [2024-11-04 10:05:56.652068] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:51.135 [2024-11-04 10:05:56.652238] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58991 ] 00:05:51.135 [2024-11-04 10:05:56.835395] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
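
non_locking_app_on_locked_coremask, starting above, is the reason the opt-out flag exists: two targets can share core 0 only if the second declines the core lock and listens on its own RPC socket. The launch pair, with the flags exactly as traced:

    # From the spdk repo root; both masks are 0x1, i.e. core 0 only.
    build/bin/spdk_tgt -m 0x1 &                    # takes /var/tmp/spdk_cpu_lock_000
    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
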
00:05:51.135 [2024-11-04 10:05:56.835482] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.396 [2024-11-04 10:05:57.122665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.827 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:52.827 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:52.827 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58975 00:05:52.827 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58975 00:05:52.827 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58975 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58975 ']' 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58975 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58975 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:53.396 killing process with pid 58975 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58975' 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58975 00:05:53.396 10:05:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58975 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58991 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58991 ']' 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58991 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58991 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:56.689 killing process with pid 58991 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58991' 00:05:56.689 10:06:01 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58991 00:05:56.689 10:06:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58991 00:05:57.632 00:05:57.632 real 0m7.448s 00:05:57.632 user 0m7.606s 00:05:57.632 sys 0m1.025s 00:05:57.632 10:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:57.632 10:06:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.632 ************************************ 00:05:57.632 END TEST non_locking_app_on_locked_coremask 00:05:57.632 ************************************ 00:05:57.632 10:06:03 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:57.632 10:06:03 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.632 10:06:03 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.632 10:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.632 ************************************ 00:05:57.632 START TEST locking_app_on_unlocked_coremask 00:05:57.632 ************************************ 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59098 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59098 /var/tmp/spdk.sock 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59098 ']' 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:57.632 10:06:03 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.632 [2024-11-04 10:06:03.167006] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:05:57.632 [2024-11-04 10:06:03.167135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59098 ] 00:05:57.632 [2024-11-04 10:06:03.328976] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
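
Throughout these tests locks_exist is the single source of truth: a core claim is nothing more than a file lock on /var/tmp/spdk_cpu_lock_NNN held by the target's pid, so lslocks can verify it from outside the process. The check, as traced repeatedly above:

    locks_exist() {
        lslocks -p "$1" | grep -q spdk_cpu_lock    # pid $1 holds a core lock file
    }
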
00:05:57.632 [2024-11-04 10:06:03.329029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.893 [2024-11-04 10:06:03.433426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59114 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59114 /var/tmp/spdk2.sock 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59114 ']' 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:58.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:58.462 10:06:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.724 [2024-11-04 10:06:04.208741] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:05:58.724 [2024-11-04 10:06:04.208921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59114 ] 00:05:58.724 [2024-11-04 10:06:04.398742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.985 [2024-11-04 10:06:04.605073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.373 10:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.373 10:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:00.373 10:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59114 00:06:00.373 10:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59114 00:06:00.373 10:06:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59098 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59098 ']' 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59098 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59098 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:00.373 killing process with pid 59098 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59098' 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59098 00:06:00.373 10:06:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59098 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59114 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59114 ']' 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59114 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59114 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:03.671 killing process with pid 59114 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59114' 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59114 00:06:03.671 10:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59114 00:06:04.612 00:06:04.612 real 0m7.244s 00:06:04.612 user 0m7.524s 00:06:04.612 sys 0m0.895s 00:06:04.612 10:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:04.612 10:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.612 ************************************ 00:06:04.612 END TEST locking_app_on_unlocked_coremask 00:06:04.612 ************************************ 00:06:04.874 10:06:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.874 10:06:10 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:04.874 10:06:10 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:04.874 10:06:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.874 ************************************ 00:06:04.874 START TEST locking_app_on_locked_coremask 00:06:04.874 ************************************ 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:04.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59216 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59216 /var/tmp/spdk.sock 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59216 ']' 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.874 10:06:10 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.874 [2024-11-04 10:06:10.451570] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
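
locking_app_on_unlocked_coremask, which just ended, inverted the earlier pairing: the first target opted out with --disable-cpumask-locks, leaving core 0 unclaimed, so the second, lock-taking target could start, and both ran side by side until killprocess reaped each in turn. The launch order, for contrast with the sketch further up:

    build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &     # leaves core 0 unclaimed
    build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &      # claims spdk_cpu_lock_000
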
00:06:04.874 [2024-11-04 10:06:10.451705] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59216 ] 00:06:04.874 [2024-11-04 10:06:10.604681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.134 [2024-11-04 10:06:10.707072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59232 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59232 /var/tmp/spdk2.sock 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59232 /var/tmp/spdk2.sock 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59232 /var/tmp/spdk2.sock 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59232 ']' 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:05.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:05.706 10:06:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:05.706 [2024-11-04 10:06:11.399611] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:06:05.706 [2024-11-04 10:06:11.399739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59232 ] 00:06:05.967 [2024-11-04 10:06:11.572709] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59216 has claimed it. 00:06:05.967 [2024-11-04 10:06:11.572768] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:06.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59232) - No such process 00:06:06.540 ERROR: process (pid: 59232) is no longer running 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59216 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59216 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59216 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59216 ']' 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59216 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:06.540 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59216 00:06:06.801 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:06.801 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:06.801 killing process with pid 59216 00:06:06.801 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59216' 00:06:06.801 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59216 00:06:06.801 10:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59216 00:06:08.226 00:06:08.226 real 0m3.457s 00:06:08.226 user 0m3.747s 00:06:08.226 sys 0m0.511s 00:06:08.226 10:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.226 10:06:13 event.cpu_locks.locking_app_on_locked_coremask 
-- common/autotest_common.sh@10 -- # set +x 00:06:08.226 ************************************ 00:06:08.226 END TEST locking_app_on_locked_coremask 00:06:08.226 ************************************ 00:06:08.226 10:06:13 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:08.226 10:06:13 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.226 10:06:13 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.226 10:06:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.226 ************************************ 00:06:08.226 START TEST locking_overlapped_coremask 00:06:08.226 ************************************ 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59291 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59291 /var/tmp/spdk.sock 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59291 ']' 00:06:08.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.226 10:06:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.226 [2024-11-04 10:06:13.952777] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:06:08.226 [2024-11-04 10:06:13.953151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59291 ] 00:06:08.487 [2024-11-04 10:06:14.136218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.747 [2024-11-04 10:06:14.247081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.748 [2024-11-04 10:06:14.247197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.748 [2024-11-04 10:06:14.247368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59309 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59309 /var/tmp/spdk2.sock 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59309 /var/tmp/spdk2.sock 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59309 /var/tmp/spdk2.sock 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59309 ']' 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.319 10:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.319 [2024-11-04 10:06:14.925844] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
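
locking_overlapped_coremask moves to multi-core masks: the first target's -m 0x7 claims three core locks, and the second launch's -m 0x1c overlaps it on exactly one core, which is enough to abort it (the 'Cannot create lock on core 2' error just below). The overlap, spelled out:

    # 0x7  = 0b00111 -> cores 0,1,2    claimed first by pid 59291
    # 0x1c = 0b11100 -> cores 2,3,4    core 2 collides, the launch exits
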
00:06:09.319 [2024-11-04 10:06:14.925960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59309 ] 00:06:09.580 [2024-11-04 10:06:15.100112] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59291 has claimed it. 00:06:09.580 [2024-11-04 10:06:15.103815] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:09.841 ERROR: process (pid: 59309) is no longer running 00:06:09.841 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59309) - No such process 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59291 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59291 ']' 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59291 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:09.841 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59291 00:06:10.102 killing process with pid 59291 00:06:10.102 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:10.102 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:10.102 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59291' 00:06:10.102 10:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59291 00:06:10.102 10:06:15 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59291 00:06:11.490 ************************************ 00:06:11.490 END TEST locking_overlapped_coremask 00:06:11.490 ************************************ 00:06:11.490 00:06:11.490 real 0m3.225s 00:06:11.490 user 0m8.702s 00:06:11.490 sys 0m0.446s 00:06:11.490 10:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:11.491 10:06:17 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:11.491 10:06:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:11.491 10:06:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:11.491 10:06:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.491 ************************************ 00:06:11.491 START TEST locking_overlapped_coremask_via_rpc 00:06:11.491 ************************************ 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59362 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59362 ']' 00:06:11.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:11.491 10:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.491 [2024-11-04 10:06:17.203447] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:11.491 [2024-11-04 10:06:17.203570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59362 ] 00:06:11.752 [2024-11-04 10:06:17.363866] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
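
After the failed overlapping launch, check_remaining_locks (traced a few records above) asserts that the surviving target still holds exactly the lock files for cores 0 through 2 and nothing else, by comparing a glob of the lock directory against the expected brace expansion:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]    # only cores 000-002 remain locked
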
00:06:11.752 [2024-11-04 10:06:17.364064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.752 [2024-11-04 10:06:17.468044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.752 [2024-11-04 10:06:17.468094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:11.752 [2024-11-04 10:06:17.468104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59380 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59380 ']' 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.694 10:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:12.694 [2024-11-04 10:06:18.136717] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:12.694 [2024-11-04 10:06:18.136858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59380 ] 00:06:12.694 [2024-11-04 10:06:18.311316] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:12.694 [2024-11-04 10:06:18.314795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:12.969 [2024-11-04 10:06:18.524817] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:12.969 [2024-11-04 10:06:18.527864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.969 [2024-11-04 10:06:18.527887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.356 [2024-11-04 10:06:19.795951] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59362 has claimed it. 00:06:14.356 request: 00:06:14.356 { 00:06:14.356 "method": "framework_enable_cpumask_locks", 00:06:14.356 "req_id": 1 00:06:14.356 } 00:06:14.356 Got JSON-RPC error response 00:06:14.356 response: 00:06:14.356 { 00:06:14.356 "code": -32603, 00:06:14.356 "message": "Failed to claim CPU core: 2" 00:06:14.356 } 00:06:14.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
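Enabling the locks on the first target succeeds and creates /var/tmp/spdk_cpu_lock_000 through _002; the same RPC against the second target is expected to fail with the -32603 response shown above, because process 59362 already holds core 2. The two calls, sketched with the socket paths and method name as logged:

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        && echo 'unexpected success' || echo 'failed as expected: core 2 already claimed'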
00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59362 /var/tmp/spdk.sock 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59362 ']' 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.356 10:06:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59380 /var/tmp/spdk2.sock 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59380 ']' 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:14.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
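The es bookkeeping above is the harness converting the expected RPC failure (es=1) into a test pass before re-polling both sockets with waitforlisten. The decisive assertion comes a few entries below in check_remaining_locks, which reduces to a glob-versus-brace-expansion comparison; a standalone sketch of that check:

    locks=(/var/tmp/spdk_cpu_lock_*)                    # lock files that actually exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # exactly cores 0-2 of mask 0x7
    [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'only the expected locks remain'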
00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.356 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:14.617 00:06:14.617 real 0m3.121s 00:06:14.617 user 0m1.129s 00:06:14.617 sys 0m0.114s 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:14.617 10:06:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:14.617 ************************************ 00:06:14.617 END TEST locking_overlapped_coremask_via_rpc 00:06:14.617 ************************************ 00:06:14.617 10:06:20 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:14.617 10:06:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:06:14.617 10:06:20 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59362 ']' 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59362 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59362 00:06:14.617 killing process with pid 59362 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59362' 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59362 00:06:14.617 10:06:20 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59362 00:06:16.579 10:06:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:06:16.579 10:06:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59380 ']' 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59380 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.579 
10:06:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59380 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59380' 00:06:16.579 killing process with pid 59380 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59380 00:06:16.579 10:06:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59380 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:17.962 Process with pid 59362 is not found 00:06:17.962 Process with pid 59380 is not found 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59362 ]] 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59362 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59362 ']' 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59362 00:06:17.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59362) - No such process 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59362 is not found' 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59380 ]] 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59380 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59380 ']' 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59380 00:06:17.962 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59380) - No such process 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59380 is not found' 00:06:17.962 10:06:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:17.962 00:06:17.962 real 0m32.840s 00:06:17.962 user 0m57.345s 00:06:17.962 sys 0m4.695s 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.962 ************************************ 00:06:17.962 END TEST cpu_locks 00:06:17.962 ************************************ 00:06:17.962 10:06:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:17.962 ************************************ 00:06:17.962 END TEST event 00:06:17.962 ************************************ 00:06:17.962 00:06:17.962 real 0m59.122s 00:06:17.962 user 1m49.637s 00:06:17.962 sys 0m7.565s 00:06:17.962 10:06:23 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:17.962 10:06:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.962 10:06:23 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:17.962 10:06:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:17.962 10:06:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:17.962 10:06:23 -- common/autotest_common.sh@10 -- # set +x 00:06:17.962 ************************************ 00:06:17.962 START TEST thread 00:06:17.962 ************************************ 00:06:17.962 10:06:23 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:17.962 * Looking for test storage... 
00:06:17.962 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:17.962 10:06:23 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:17.962 10:06:23 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:17.962 10:06:23 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:18.238 10:06:23 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:18.238 10:06:23 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.238 10:06:23 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.238 10:06:23 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.238 10:06:23 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.238 10:06:23 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.238 10:06:23 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.238 10:06:23 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.238 10:06:23 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.238 10:06:23 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.238 10:06:23 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.238 10:06:23 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.238 10:06:23 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:18.238 10:06:23 thread -- scripts/common.sh@345 -- # : 1 00:06:18.238 10:06:23 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.238 10:06:23 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:18.238 10:06:23 thread -- scripts/common.sh@365 -- # decimal 1 00:06:18.238 10:06:23 thread -- scripts/common.sh@353 -- # local d=1 00:06:18.238 10:06:23 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.238 10:06:23 thread -- scripts/common.sh@355 -- # echo 1 00:06:18.238 10:06:23 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.238 10:06:23 thread -- scripts/common.sh@366 -- # decimal 2 00:06:18.238 10:06:23 thread -- scripts/common.sh@353 -- # local d=2 00:06:18.238 10:06:23 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.238 10:06:23 thread -- scripts/common.sh@355 -- # echo 2 00:06:18.238 10:06:23 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.238 10:06:23 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.238 10:06:23 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.238 10:06:23 thread -- scripts/common.sh@368 -- # return 0 00:06:18.238 10:06:23 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.238 10:06:23 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:18.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.238 --rc genhtml_branch_coverage=1 00:06:18.239 --rc genhtml_function_coverage=1 00:06:18.239 --rc genhtml_legend=1 00:06:18.239 --rc geninfo_all_blocks=1 00:06:18.239 --rc geninfo_unexecuted_blocks=1 00:06:18.239 00:06:18.239 ' 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.239 --rc genhtml_branch_coverage=1 00:06:18.239 --rc genhtml_function_coverage=1 00:06:18.239 --rc genhtml_legend=1 00:06:18.239 --rc geninfo_all_blocks=1 00:06:18.239 --rc geninfo_unexecuted_blocks=1 00:06:18.239 00:06:18.239 ' 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:18.239 --rc genhtml_branch_coverage=1 00:06:18.239 --rc genhtml_function_coverage=1 00:06:18.239 --rc genhtml_legend=1 00:06:18.239 --rc geninfo_all_blocks=1 00:06:18.239 --rc geninfo_unexecuted_blocks=1 00:06:18.239 00:06:18.239 ' 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:18.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.239 --rc genhtml_branch_coverage=1 00:06:18.239 --rc genhtml_function_coverage=1 00:06:18.239 --rc genhtml_legend=1 00:06:18.239 --rc geninfo_all_blocks=1 00:06:18.239 --rc geninfo_unexecuted_blocks=1 00:06:18.239 00:06:18.239 ' 00:06:18.239 10:06:23 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.239 10:06:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:18.239 ************************************ 00:06:18.239 START TEST thread_poller_perf 00:06:18.239 ************************************ 00:06:18.239 10:06:23 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:18.239 [2024-11-04 10:06:23.751624] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:18.239 [2024-11-04 10:06:23.751932] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59545 ] 00:06:18.239 [2024-11-04 10:06:23.904955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.499 [2024-11-04 10:06:24.006521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.499 Running 1000 pollers for 1 seconds with 1 microseconds period. 
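Reading the poller_perf invocations against their banners, -b is apparently the poller count, -l the poller period in microseconds, and -t the run time in seconds; the suite runs the tool twice:

    test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1   # 1000 pollers, 1 us period, 1 s run (this run)
    test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1   # zero-period pollers (the run that follows)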
00:06:19.440 [2024-11-04T10:06:25.185Z] ====================================== 00:06:19.440 [2024-11-04T10:06:25.185Z] busy:2614578832 (cyc) 00:06:19.440 [2024-11-04T10:06:25.185Z] total_run_count: 306000 00:06:19.440 [2024-11-04T10:06:25.185Z] tsc_hz: 2600000000 (cyc) 00:06:19.440 [2024-11-04T10:06:25.185Z] ====================================== 00:06:19.440 [2024-11-04T10:06:25.185Z] poller_cost: 8544 (cyc), 3286 (nsec) 00:06:19.440 00:06:19.440 real 0m1.452s 00:06:19.440 user 0m1.283s 00:06:19.440 sys 0m0.060s 00:06:19.440 10:06:25 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:19.440 10:06:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.440 ************************************ 00:06:19.440 END TEST thread_poller_perf 00:06:19.440 ************************************ 00:06:19.700 10:06:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.700 10:06:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:19.700 10:06:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:19.700 10:06:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.700 ************************************ 00:06:19.700 START TEST thread_poller_perf 00:06:19.700 ************************************ 00:06:19.700 10:06:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:19.700 [2024-11-04 10:06:25.251213] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:19.700 [2024-11-04 10:06:25.251328] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59582 ] 00:06:19.701 [2024-11-04 10:06:25.415132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.961 Running 1000 pollers for 1 seconds with 0 microseconds period. 
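The first run's table above is self-consistent: poller_cost is evidently busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz. A worked check of the printed values:

    echo '2614578832 / 306000' | bc          # busy / total_run_count -> 8544 (cyc)
    echo '8544 * 10^9 / 2600000000' | bc     # cycles to nsec at tsc_hz 2.6 GHz -> 3286

The zero-period run below comes in far cheaper (696 cyc, 267 nsec), presumably because untimed pollers skip the timer bookkeeping.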
00:06:19.961 [2024-11-04 10:06:25.524091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.343 [2024-11-04T10:06:27.088Z] ====================================== 00:06:21.343 [2024-11-04T10:06:27.088Z] busy:2603530348 (cyc) 00:06:21.344 [2024-11-04T10:06:27.089Z] total_run_count: 3737000 00:06:21.344 [2024-11-04T10:06:27.089Z] tsc_hz: 2600000000 (cyc) 00:06:21.344 [2024-11-04T10:06:27.089Z] ====================================== 00:06:21.344 [2024-11-04T10:06:27.089Z] poller_cost: 696 (cyc), 267 (nsec) 00:06:21.344 00:06:21.344 real 0m1.461s 00:06:21.344 user 0m1.277s 00:06:21.344 sys 0m0.074s 00:06:21.344 10:06:26 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.344 10:06:26 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 ************************************ 00:06:21.344 END TEST thread_poller_perf 00:06:21.344 ************************************ 00:06:21.344 10:06:26 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:21.344 00:06:21.344 real 0m3.127s 00:06:21.344 user 0m2.660s 00:06:21.344 sys 0m0.234s 00:06:21.344 ************************************ 00:06:21.344 END TEST thread 00:06:21.344 ************************************ 00:06:21.344 10:06:26 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.344 10:06:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 10:06:26 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:21.344 10:06:26 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:21.344 10:06:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.344 10:06:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.344 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 ************************************ 00:06:21.344 START TEST app_cmdline 00:06:21.344 ************************************ 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:21.344 * Looking for test storage... 
00:06:21.344 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:21.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
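The target for this suite runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so exactly those two methods are callable and anything else must return JSON-RPC error -32601 ('Method not found'), which is what the env_dpdk_get_mem_stats probe further below verifies. The three calls, sketched:

    scripts/rpc.py spdk_get_version         # allowed: returns the version object
    scripts/rpc.py rpc_get_methods          # allowed: lists the permitted methods
    scripts/rpc.py env_dpdk_get_mem_stats   # blocked: -32601 'Method not found'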
00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.344 10:06:26 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:21.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.344 --rc genhtml_branch_coverage=1 00:06:21.344 --rc genhtml_function_coverage=1 00:06:21.344 --rc genhtml_legend=1 00:06:21.344 --rc geninfo_all_blocks=1 00:06:21.344 --rc geninfo_unexecuted_blocks=1 00:06:21.344 00:06:21.344 ' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:21.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.344 --rc genhtml_branch_coverage=1 00:06:21.344 --rc genhtml_function_coverage=1 00:06:21.344 --rc genhtml_legend=1 00:06:21.344 --rc geninfo_all_blocks=1 00:06:21.344 --rc geninfo_unexecuted_blocks=1 00:06:21.344 00:06:21.344 ' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:21.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.344 --rc genhtml_branch_coverage=1 00:06:21.344 --rc genhtml_function_coverage=1 00:06:21.344 --rc genhtml_legend=1 00:06:21.344 --rc geninfo_all_blocks=1 00:06:21.344 --rc geninfo_unexecuted_blocks=1 00:06:21.344 00:06:21.344 ' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:21.344 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.344 --rc genhtml_branch_coverage=1 00:06:21.344 --rc genhtml_function_coverage=1 00:06:21.344 --rc genhtml_legend=1 00:06:21.344 --rc geninfo_all_blocks=1 00:06:21.344 --rc geninfo_unexecuted_blocks=1 00:06:21.344 00:06:21.344 ' 00:06:21.344 10:06:26 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:21.344 10:06:26 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59665 00:06:21.344 10:06:26 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59665 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59665 ']' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.344 10:06:26 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:21.344 10:06:26 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:21.344 [2024-11-04 10:06:26.956887] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:06:21.344 [2024-11-04 10:06:26.957157] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:06:21.605 [2024-11-04 10:06:27.112690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.605 [2024-11-04 10:06:27.212706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.174 10:06:27 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.174 10:06:27 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:22.174 10:06:27 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:22.435 { 00:06:22.435 "version": "SPDK v25.01-pre git sha1 3f50defde", 00:06:22.435 "fields": { 00:06:22.435 "major": 25, 00:06:22.435 "minor": 1, 00:06:22.435 "patch": 0, 00:06:22.435 "suffix": "-pre", 00:06:22.435 "commit": "3f50defde" 00:06:22.435 } 00:06:22.435 } 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:22.435 10:06:28 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:22.435 10:06:28 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:22.696 request: 00:06:22.696 { 00:06:22.696 "method": "env_dpdk_get_mem_stats", 00:06:22.696 "req_id": 1 00:06:22.696 } 00:06:22.696 Got JSON-RPC error response 00:06:22.696 response: 00:06:22.696 { 00:06:22.696 "code": -32601, 00:06:22.696 "message": "Method not found" 00:06:22.696 } 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:22.696 10:06:28 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59665 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59665 ']' 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59665 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59665 00:06:22.696 killing process with pid 59665 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59665' 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@971 -- # kill 59665 00:06:22.696 10:06:28 app_cmdline -- common/autotest_common.sh@976 -- # wait 59665 00:06:24.610 00:06:24.610 real 0m3.100s 00:06:24.610 user 0m3.416s 00:06:24.610 sys 0m0.415s 00:06:24.610 10:06:29 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.610 ************************************ 00:06:24.610 END TEST app_cmdline 00:06:24.610 ************************************ 00:06:24.610 10:06:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:24.610 10:06:29 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.610 10:06:29 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:24.610 10:06:29 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.610 10:06:29 -- common/autotest_common.sh@10 -- # set +x 00:06:24.610 ************************************ 00:06:24.610 START TEST version 00:06:24.610 ************************************ 00:06:24.610 10:06:29 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:24.610 * Looking for test storage... 
00:06:24.610 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:24.610 10:06:29 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.610 10:06:29 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.610 10:06:29 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.610 10:06:30 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.610 10:06:30 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.610 10:06:30 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.610 10:06:30 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.610 10:06:30 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.610 10:06:30 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.610 10:06:30 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.610 10:06:30 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.610 10:06:30 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.610 10:06:30 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.610 10:06:30 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.610 10:06:30 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.610 10:06:30 version -- scripts/common.sh@344 -- # case "$op" in 00:06:24.610 10:06:30 version -- scripts/common.sh@345 -- # : 1 00:06:24.610 10:06:30 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.610 10:06:30 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:24.610 10:06:30 version -- scripts/common.sh@365 -- # decimal 1 00:06:24.610 10:06:30 version -- scripts/common.sh@353 -- # local d=1 00:06:24.610 10:06:30 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.610 10:06:30 version -- scripts/common.sh@355 -- # echo 1 00:06:24.611 10:06:30 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.611 10:06:30 version -- scripts/common.sh@366 -- # decimal 2 00:06:24.611 10:06:30 version -- scripts/common.sh@353 -- # local d=2 00:06:24.611 10:06:30 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.611 10:06:30 version -- scripts/common.sh@355 -- # echo 2 00:06:24.611 10:06:30 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.611 10:06:30 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.611 10:06:30 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.611 10:06:30 version -- scripts/common.sh@368 -- # return 0 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.611 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 version -- app/version.sh@17 -- # get_header_version major 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.611 10:06:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.611 10:06:30 version -- app/version.sh@17 -- # major=25 00:06:24.611 10:06:30 version -- app/version.sh@18 -- # get_header_version minor 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.611 10:06:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.611 10:06:30 version -- app/version.sh@18 -- # minor=1 00:06:24.611 10:06:30 version -- app/version.sh@19 -- # get_header_version patch 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.611 10:06:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.611 10:06:30 version -- app/version.sh@19 -- # patch=0 00:06:24.611 10:06:30 version -- app/version.sh@20 -- # get_header_version suffix 00:06:24.611 10:06:30 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # cut -f2 00:06:24.611 10:06:30 version -- app/version.sh@14 -- # tr -d '"' 00:06:24.611 10:06:30 version -- app/version.sh@20 -- # suffix=-pre 00:06:24.611 10:06:30 version -- app/version.sh@22 -- # version=25.1 00:06:24.611 10:06:30 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:24.611 10:06:30 version -- app/version.sh@28 -- # version=25.1rc0 00:06:24.611 10:06:30 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:24.611 10:06:30 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:24.611 10:06:30 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:24.611 10:06:30 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:24.611 00:06:24.611 real 0m0.188s 00:06:24.611 user 0m0.134s 00:06:24.611 sys 0m0.084s 00:06:24.611 ************************************ 00:06:24.611 END TEST version 00:06:24.611 ************************************ 00:06:24.611 10:06:30 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.611 10:06:30 version -- common/autotest_common.sh@10 -- # set +x 00:06:24.611 10:06:30 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:24.611 10:06:30 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:24.611 10:06:30 -- spdk/autotest.sh@194 -- # uname -s 00:06:24.611 10:06:30 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:24.611 10:06:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:24.611 10:06:30 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:24.611 10:06:30 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:24.611 10:06:30 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:24.611 10:06:30 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:24.611 10:06:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:24.611 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.611 ************************************ 00:06:24.611 START TEST blockdev_nvme 00:06:24.611 ************************************ 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:24.611 * Looking for test storage... 00:06:24.611 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:24.611 10:06:30 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:24.611 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:24.611 --rc genhtml_branch_coverage=1 00:06:24.611 --rc genhtml_function_coverage=1 00:06:24.611 --rc genhtml_legend=1 00:06:24.611 --rc geninfo_all_blocks=1 00:06:24.611 --rc geninfo_unexecuted_blocks=1 00:06:24.611 00:06:24.611 ' 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:24.611 10:06:30 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:24.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59843 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:24.611 10:06:30 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59843 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59843 ']' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.611 10:06:30 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.612 10:06:30 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:24.612 [2024-11-04 10:06:30.341499] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
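The bdev configuration that gen_nvme.sh emits and load_subsystem_config consumes just below attaches one controller per emulated PCIe function; reflowed from the logged JSON, the first of the four entries reads:

    { "method": "bdev_nvme_attach_controller",
      "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } }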
00:06:24.612 [2024-11-04 10:06:30.341620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59843 ] 00:06:24.872 [2024-11-04 10:06:30.500682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.872 [2024-11-04 10:06:30.599867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:25.817 10:06:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.817 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.078 10:06:31 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.078 10:06:31 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:26.078 10:06:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:26.078 10:06:31 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.078 10:06:31 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:26.078 10:06:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:26.078 10:06:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:26.079 10:06:31 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "ad095e7f-e8c7-4a65-bbc5-232ca63c0f16"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "ad095e7f-e8c7-4a65-bbc5-232ca63c0f16",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "d954685b-e166-4c60-be08-cd6388655863"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d954685b-e166-4c60-be08-cd6388655863",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "525d4a03-1e5f-41d3-a018-e168a1c37981"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "525d4a03-1e5f-41d3-a018-e168a1c37981",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "0299cc65-d50a-4e98-8f7d-24e236b8904d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "0299cc65-d50a-4e98-8f7d-24e236b8904d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "38f855b4-1076-4226-b90a-daefb1f69815"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "38f855b4-1076-4226-b90a-daefb1f69815",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "8010c2d7-2270-437e-9b0a-781486786032"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "8010c2d7-2270-437e-9b0a-781486786032",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:26.079 10:06:31 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:26.079 10:06:31 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:26.079 10:06:31 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:26.079 10:06:31 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59843 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59843 ']' 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59843 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:06:26.079 10:06:31 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59843 00:06:26.079 killing process with pid 59843 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59843' 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59843 00:06:26.079 10:06:31 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59843 00:06:27.994 10:06:33 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:27.994 10:06:33 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:27.994 10:06:33 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:27.994 10:06:33 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.994 10:06:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:27.994 ************************************ 00:06:27.994 START TEST bdev_hello_world 00:06:27.994 ************************************ 00:06:27.994 10:06:33 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:27.994 [2024-11-04 10:06:33.402444] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:27.994 [2024-11-04 10:06:33.402775] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59927 ] 00:06:27.994 [2024-11-04 10:06:33.560626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.994 [2024-11-04 10:06:33.680470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.566 [2024-11-04 10:06:34.255943] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:28.566 [2024-11-04 10:06:34.256006] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:28.566 [2024-11-04 10:06:34.256028] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:28.566 [2024-11-04 10:06:34.258921] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:28.566 [2024-11-04 10:06:34.259555] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:28.566 [2024-11-04 10:06:34.259586] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:28.566 [2024-11-04 10:06:34.259942] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
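For reference, the hello_bdev stage that just completed reduces to a single standalone command. This is a sketch assembled from the xtrace above — the binary, the --json config path, and the -b Nvme0n1 bdev name are copied verbatim from the log; running it outside the run_test wrapper and the $spdk shorthand are assumptions:

    # Minimal re-run of the hello_bdev example as exercised above:
    # open bdev Nvme0n1 via the JSON bdev config and do the write/read round-trip.
    spdk=/home/vagrant/spdk_repo/spdk   # repo root, as seen in the log paths
    "$spdk/build/examples/hello_bdev" --json "$spdk/test/bdev/bdev.json" -b Nvme0n1

On success it emits the same NOTICE sequence seen above, ending in "Read string from bdev : Hello World!".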
00:06:28.566 00:06:28.566 [2024-11-04 10:06:34.259966] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:29.509 ************************************ 00:06:29.509 END TEST bdev_hello_world 00:06:29.509 ************************************ 00:06:29.509 00:06:29.509 real 0m1.694s 00:06:29.509 user 0m1.364s 00:06:29.509 sys 0m0.219s 00:06:29.509 10:06:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.509 10:06:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:29.509 10:06:35 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:29.509 10:06:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:29.509 10:06:35 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.509 10:06:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:29.509 ************************************ 00:06:29.509 START TEST bdev_bounds 00:06:29.509 ************************************ 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59958 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:29.509 Process bdevio pid: 59958 00:06:29.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59958' 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59958 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59958 ']' 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:29.509 10:06:35 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:29.509 [2024-11-04 10:06:35.144849] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:06:29.509 [2024-11-04 10:06:35.145138] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59958 ] 00:06:29.770 [2024-11-04 10:06:35.311204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:29.770 [2024-11-04 10:06:35.431512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:29.770 [2024-11-04 10:06:35.431741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.770 [2024-11-04 10:06:35.431849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.366 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:30.366 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:30.366 10:06:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:30.628 I/O targets: 00:06:30.628 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:30.628 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:30.628 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:30.628 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:30.628 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:30.628 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:30.628 00:06:30.628 00:06:30.628 CUnit - A unit testing framework for C - Version 2.1-3 00:06:30.628 http://cunit.sourceforge.net/ 00:06:30.628 00:06:30.628 00:06:30.628 Suite: bdevio tests on: Nvme3n1 00:06:30.628 Test: blockdev write read block ...passed 00:06:30.628 Test: blockdev write zeroes read block ...passed 00:06:30.628 Test: blockdev write zeroes read no split ...passed 00:06:30.628 Test: blockdev write zeroes read split ...passed 00:06:30.628 Test: blockdev write zeroes read split partial ...passed 00:06:30.628 Test: blockdev reset ...[2024-11-04 10:06:36.186585] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:30.628 [2024-11-04 10:06:36.190864] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:06:30.628 passed 00:06:30.628 Test: blockdev write read 8 blocks ...passed 00:06:30.628 Test: blockdev write read size > 128k ...passed 00:06:30.628 Test: blockdev write read invalid size ...passed 00:06:30.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.628 Test: blockdev write read max offset ...passed 00:06:30.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.628 Test: blockdev writev readv 8 blocks ...passed 00:06:30.628 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.628 Test: blockdev writev readv block ...passed 00:06:30.628 Test: blockdev writev readv size > 128k ...passed 00:06:30.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.628 Test: blockdev comparev and writev ...[2024-11-04 10:06:36.205955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b280a000 len:0x1000 00:06:30.628 [2024-11-04 10:06:36.206002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:30.628 passed 00:06:30.628 Test: blockdev nvme passthru rw ...passed 00:06:30.628 Test: blockdev nvme passthru vendor specific ...passed 00:06:30.628 Test: blockdev nvme admin passthru ...[2024-11-04 10:06:36.207834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:30.628 [2024-11-04 10:06:36.207873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:30.628 passed 00:06:30.628 Test: blockdev copy ...passed 00:06:30.628 Suite: bdevio tests on: Nvme2n3 00:06:30.628 Test: blockdev write read block ...passed 00:06:30.628 Test: blockdev write zeroes read block ...passed 00:06:30.628 Test: blockdev write zeroes read no split ...passed 00:06:30.628 Test: blockdev write zeroes read split ...passed 00:06:30.628 Test: blockdev write zeroes read split partial ...passed 00:06:30.628 Test: blockdev reset ...[2024-11-04 10:06:36.270331] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:30.628 [2024-11-04 10:06:36.275557] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:30.628 passed 00:06:30.628 Test: blockdev write read 8 blocks ...passed 00:06:30.628 Test: blockdev write read size > 128k ...passed 00:06:30.628 Test: blockdev write read invalid size ...passed 00:06:30.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.628 Test: blockdev write read max offset ...passed 00:06:30.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.628 Test: blockdev writev readv 8 blocks ...passed 00:06:30.628 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.628 Test: blockdev writev readv block ...passed 00:06:30.628 Test: blockdev writev readv size > 128k ...passed 00:06:30.628 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.628 Test: blockdev comparev and writev ...[2024-11-04 10:06:36.290121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9606000 len:0x1000 00:06:30.628 [2024-11-04 10:06:36.290167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:30.628 passed 00:06:30.628 Test: blockdev nvme passthru rw ...passed 00:06:30.628 Test: blockdev nvme passthru vendor specific ...passed 00:06:30.628 Test: blockdev nvme admin passthru ...[2024-11-04 10:06:36.291451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:30.628 [2024-11-04 10:06:36.291480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:30.628 passed 00:06:30.628 Test: blockdev copy ...passed 00:06:30.628 Suite: bdevio tests on: Nvme2n2 00:06:30.628 Test: blockdev write read block ...passed 00:06:30.628 Test: blockdev write zeroes read block ...passed 00:06:30.628 Test: blockdev write zeroes read no split ...passed 00:06:30.628 Test: blockdev write zeroes read split ...passed 00:06:30.628 Test: blockdev write zeroes read split partial ...passed 00:06:30.628 Test: blockdev reset ...[2024-11-04 10:06:36.352826] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:30.628 [2024-11-04 10:06:36.356363] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:30.628 passed 00:06:30.628 Test: blockdev write read 8 blocks ...passed 00:06:30.628 Test: blockdev write read size > 128k ...passed 00:06:30.628 Test: blockdev write read invalid size ...passed 00:06:30.628 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.628 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.628 Test: blockdev write read max offset ...passed 00:06:30.628 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.628 Test: blockdev writev readv 8 blocks ...passed 00:06:30.628 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.628 Test: blockdev writev readv block ...passed 00:06:30.628 Test: blockdev writev readv size > 128k ...passed 00:06:30.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.890 Test: blockdev comparev and writev ...[2024-11-04 10:06:36.372262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c123c000 len:0x1000 00:06:30.890 [2024-11-04 10:06:36.372307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme passthru rw ...passed 00:06:30.890 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:06:36.373746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:30.890 [2024-11-04 10:06:36.373773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme admin passthru ...passed 00:06:30.890 Test: blockdev copy ...passed 00:06:30.890 Suite: bdevio tests on: Nvme2n1 00:06:30.890 Test: blockdev write read block ...passed 00:06:30.890 Test: blockdev write zeroes read block ...passed 00:06:30.890 Test: blockdev write zeroes read no split ...passed 00:06:30.890 Test: blockdev write zeroes read split ...passed 00:06:30.890 Test: blockdev write zeroes read split partial ...passed 00:06:30.890 Test: blockdev reset ...[2024-11-04 10:06:36.436291] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:30.890 [2024-11-04 10:06:36.439652] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:06:30.890 Test: blockdev write read 8 blocks ...successful. 
00:06:30.890 passed 00:06:30.890 Test: blockdev write read size > 128k ...passed 00:06:30.890 Test: blockdev write read invalid size ...passed 00:06:30.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.890 Test: blockdev write read max offset ...passed 00:06:30.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.890 Test: blockdev writev readv 8 blocks ...passed 00:06:30.890 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.890 Test: blockdev writev readv block ...passed 00:06:30.890 Test: blockdev writev readv size > 128k ...passed 00:06:30.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.890 Test: blockdev comparev and writev ...[2024-11-04 10:06:36.455315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1238000 len:0x1000 00:06:30.890 [2024-11-04 10:06:36.455474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme passthru rw ...passed 00:06:30.890 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:06:36.458068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:30.890 [2024-11-04 10:06:36.458289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme admin passthru ...passed 00:06:30.890 Test: blockdev copy ...passed 00:06:30.890 Suite: bdevio tests on: Nvme1n1 00:06:30.890 Test: blockdev write read block ...passed 00:06:30.890 Test: blockdev write zeroes read block ...passed 00:06:30.890 Test: blockdev write zeroes read no split ...passed 00:06:30.890 Test: blockdev write zeroes read split ...passed 00:06:30.890 Test: blockdev write zeroes read split partial ...passed 00:06:30.890 Test: blockdev reset ...[2024-11-04 10:06:36.523008] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:30.890 [2024-11-04 10:06:36.526309] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:30.890 passed 00:06:30.890 Test: blockdev write read 8 blocks ...passed 00:06:30.890 Test: blockdev write read size > 128k ...passed 00:06:30.890 Test: blockdev write read invalid size ...passed 00:06:30.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.890 Test: blockdev write read max offset ...passed 00:06:30.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.890 Test: blockdev writev readv 8 blocks ...passed 00:06:30.890 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.890 Test: blockdev writev readv block ...passed 00:06:30.890 Test: blockdev writev readv size > 128k ...passed 00:06:30.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.890 Test: blockdev comparev and writev ...[2024-11-04 10:06:36.544716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1234000 len:0x1000 00:06:30.890 [2024-11-04 10:06:36.544760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme passthru rw ...passed 00:06:30.890 Test: blockdev nvme passthru vendor specific ...passed 00:06:30.890 Test: blockdev nvme admin passthru ...[2024-11-04 10:06:36.546931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:30.890 [2024-11-04 10:06:36.546959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:30.890 passed 00:06:30.890 Test: blockdev copy ...passed 00:06:30.890 Suite: bdevio tests on: Nvme0n1 00:06:30.890 Test: blockdev write read block ...passed 00:06:30.890 Test: blockdev write zeroes read block ...passed 00:06:30.890 Test: blockdev write zeroes read no split ...passed 00:06:30.890 Test: blockdev write zeroes read split ...passed 00:06:30.890 Test: blockdev write zeroes read split partial ...passed 00:06:30.890 Test: blockdev reset ...[2024-11-04 10:06:36.608688] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:30.890 [2024-11-04 10:06:36.612498] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller passed 00:06:30.890 Test: blockdev write read 8 blocks ...successful. 
00:06:30.890 passed 00:06:30.890 Test: blockdev write read size > 128k ...passed 00:06:30.890 Test: blockdev write read invalid size ...passed 00:06:30.890 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:30.890 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:30.890 Test: blockdev write read max offset ...passed 00:06:30.890 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:30.890 Test: blockdev writev readv 8 blocks ...passed 00:06:30.890 Test: blockdev writev readv 30 x 1block ...passed 00:06:30.890 Test: blockdev writev readv block ...passed 00:06:30.890 Test: blockdev writev readv size > 128k ...passed 00:06:30.890 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:30.890 Test: blockdev comparev and writev ...passed 00:06:30.890 Test: blockdev nvme passthru rw ...[2024-11-04 10:06:36.627835] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:30.890 separate metadata which is not supported yet. 00:06:30.890 passed 00:06:30.890 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:06:36.628726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:30.890 [2024-11-04 10:06:36.628759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:30.890 passed 00:06:31.152 Test: blockdev nvme admin passthru ...passed 00:06:31.152 Test: blockdev copy ...passed 00:06:31.152 00:06:31.152 Run Summary: Type Total Ran Passed Failed Inactive 00:06:31.152 suites 6 6 n/a 0 0 00:06:31.152 tests 138 138 138 0 0 00:06:31.152 asserts 893 893 893 0 n/a 00:06:31.152 00:06:31.152 Elapsed time = 1.261 seconds 00:06:31.152 0 00:06:31.152 10:06:36 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59958 00:06:31.152 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59958 ']' 00:06:31.152 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59958 00:06:31.152 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:31.152 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59958 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59958' 00:06:31.153 killing process with pid 59958 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59958 00:06:31.153 10:06:36 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59958 00:06:31.725 10:06:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:31.726 00:06:31.726 real 0m2.282s 00:06:31.726 user 0m5.735s 00:06:31.726 sys 0m0.330s 00:06:31.726 10:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:31.726 10:06:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:31.726 ************************************ 00:06:31.726 END TEST bdev_bounds 00:06:31.726 
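The bounds test above is driven by two cooperating processes, as the log's own sequence shows: bdevio is started with -w and waits on /var/tmp/spdk.sock until tests.py issues the perform_tests RPC, which runs the CUnit suites printed above. A minimal sketch of that flow — command lines and flags are copied from the log; the backgrounding, the sleep, and the explicit kill stand in for the harness's waitforlisten/killprocess bookkeeping and are assumptions:

    # Start bdevio waiting for the perform_tests RPC (flags as logged: -w -s 0)
    spdk=/home/vagrant/spdk_repo/spdk
    "$spdk/test/bdev/bdevio/bdevio" -w -s 0 --json "$spdk/test/bdev/bdev.json" '' &
    bdevio_pid=$!
    sleep 2   # stand-in for waiting until the app listens on /var/tmp/spdk.sock (assumption)
    # Run every registered suite; this is what produced the Run Summary above
    "$spdk/test/bdev/bdevio/tests.py" perform_tests
    # Tear down the app, as killprocess does in the log
    kill "$bdevio_pid"; wait "$bdevio_pid"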
************************************ 00:06:31.726 10:06:37 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:31.726 10:06:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:31.726 10:06:37 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.726 10:06:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:31.726 ************************************ 00:06:31.726 START TEST bdev_nbd 00:06:31.726 ************************************ 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60023 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60023 /var/tmp/spdk-nbd.sock 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 60023 ']' 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:31.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:31.726 10:06:37 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:31.726 [2024-11-04 10:06:37.464310] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:31.726 [2024-11-04 10:06:37.464755] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.987 [2024-11-04 10:06:37.628342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.987 [2024-11-04 10:06:37.728889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # 
local i 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.928 1+0 records in 00:06:32.928 1+0 records out 00:06:32.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00038138 s, 10.7 MB/s 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.928 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.189 1+0 records in 00:06:33.189 1+0 records out 00:06:33.189 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000596096 s, 6.9 MB/s 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.189 10:06:38 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.452 1+0 records in 00:06:33.452 1+0 records out 00:06:33.452 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000571775 s, 7.2 MB/s 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.452 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.714 1+0 records in 00:06:33.714 1+0 records out 00:06:33.714 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040608 s, 10.1 MB/s 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.714 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.975 1+0 records in 00:06:33.975 1+0 records out 00:06:33.975 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00044045 s, 9.3 MB/s 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.975 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:34.237 1+0 records in 00:06:34.237 1+0 records out 00:06:34.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000517127 s, 7.9 MB/s 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd0", 00:06:34.237 "bdev_name": "Nvme0n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd1", 00:06:34.237 "bdev_name": "Nvme1n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd2", 00:06:34.237 "bdev_name": "Nvme2n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd3", 00:06:34.237 "bdev_name": "Nvme2n2" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd4", 00:06:34.237 "bdev_name": "Nvme2n3" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd5", 00:06:34.237 "bdev_name": "Nvme3n1" 00:06:34.237 } 00:06:34.237 ]' 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 
00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd0", 00:06:34.237 "bdev_name": "Nvme0n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd1", 00:06:34.237 "bdev_name": "Nvme1n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd2", 00:06:34.237 "bdev_name": "Nvme2n1" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd3", 00:06:34.237 "bdev_name": "Nvme2n2" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd4", 00:06:34.237 "bdev_name": "Nvme2n3" 00:06:34.237 }, 00:06:34.237 { 00:06:34.237 "nbd_device": "/dev/nbd5", 00:06:34.237 "bdev_name": "Nvme3n1" 00:06:34.237 } 00:06:34.237 ]' 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:34.237 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.238 10:06:39 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:34.498 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.499 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.759 10:06:40 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.020 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.286 10:06:40 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:35.565 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.566 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.828 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:36.089 /dev/nbd0 00:06:36.089 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:36.089 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:36.089 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.090 1+0 records in 00:06:36.090 1+0 records out 00:06:36.090 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000542152 s, 7.6 MB/s 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.090 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:36.350 /dev/nbd1 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.350 10:06:41 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.350 1+0 records in 00:06:36.350 1+0 records out 00:06:36.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382833 s, 10.7 MB/s 00:06:36.350 10:06:41 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.350 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:36.611 /dev/nbd10 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.611 1+0 records in 00:06:36.611 1+0 records out 00:06:36.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411261 s, 10.0 MB/s 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.611 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:36.872 /dev/nbd11 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 
00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.872 1+0 records in 00:06:36.872 1+0 records out 00:06:36.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378992 s, 10.8 MB/s 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.872 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:37.132 /dev/nbd12 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.132 1+0 records in 00:06:37.132 1+0 records out 00:06:37.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487384 s, 8.4 MB/s 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.132 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:37.393 /dev/nbd13 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:37.393 1+0 records in 00:06:37.393 1+0 records out 00:06:37.393 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000422429 s, 9.7 MB/s 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:37.393 10:06:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd0", 00:06:37.654 "bdev_name": "Nvme0n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd1", 00:06:37.654 "bdev_name": "Nvme1n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": 
"/dev/nbd10", 00:06:37.654 "bdev_name": "Nvme2n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd11", 00:06:37.654 "bdev_name": "Nvme2n2" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd12", 00:06:37.654 "bdev_name": "Nvme2n3" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd13", 00:06:37.654 "bdev_name": "Nvme3n1" 00:06:37.654 } 00:06:37.654 ]' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd0", 00:06:37.654 "bdev_name": "Nvme0n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd1", 00:06:37.654 "bdev_name": "Nvme1n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd10", 00:06:37.654 "bdev_name": "Nvme2n1" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd11", 00:06:37.654 "bdev_name": "Nvme2n2" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd12", 00:06:37.654 "bdev_name": "Nvme2n3" 00:06:37.654 }, 00:06:37.654 { 00:06:37.654 "nbd_device": "/dev/nbd13", 00:06:37.654 "bdev_name": "Nvme3n1" 00:06:37.654 } 00:06:37.654 ]' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:37.654 /dev/nbd1 00:06:37.654 /dev/nbd10 00:06:37.654 /dev/nbd11 00:06:37.654 /dev/nbd12 00:06:37.654 /dev/nbd13' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:37.654 /dev/nbd1 00:06:37.654 /dev/nbd10 00:06:37.654 /dev/nbd11 00:06:37.654 /dev/nbd12 00:06:37.654 /dev/nbd13' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:37.654 256+0 records in 00:06:37.654 256+0 records out 00:06:37.654 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010343 s, 101 MB/s 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.654 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:37.915 256+0 records in 00:06:37.915 256+0 records out 00:06:37.915 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.152054 s, 6.9 MB/s 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.915 256+0 records in 00:06:37.915 256+0 records out 00:06:37.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0921897 s, 11.4 MB/s 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:37.915 256+0 records in 00:06:37.915 256+0 records out 00:06:37.915 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.133785 s, 7.8 MB/s 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.915 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:38.175 256+0 records in 00:06:38.175 256+0 records out 00:06:38.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.104242 s, 10.1 MB/s 00:06:38.175 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.175 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:38.175 256+0 records in 00:06:38.175 256+0 records out 00:06:38.175 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.116038 s, 9.0 MB/s 00:06:38.175 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:38.175 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:38.435 256+0 records in 00:06:38.435 256+0 records out 00:06:38.435 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.092587 s, 11.3 MB/s 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
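The nbd_dd_data_verify sequence traced here has two phases: the write pass above pushes the same 1 MiB random pattern through every NBD device with dd (oflag=direct, so the data actually reaches the bdev rather than the page cache), and the compare pass continuing below checks each device byte-for-byte against the pattern file with cmp -b -n 1M. A condensed, standalone sketch of that pattern follows; the device list and the dd/cmp parameters are taken from the trace, while the wrapper shape and the pattern-file path are illustrative rather than the verbatim nbd_common.sh source.

# Assumes the /dev/nbdX devices are already exported by the SPDK app via
# nbd_start_disk, as in the trace above.
pattern=/tmp/nbdrandtest   # illustrative path; the test uses test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

# Write pass: generate 1 MiB of random data, then copy it to every device,
# bypassing the page cache so the bytes land on the backing bdev.
dd if=/dev/urandom of="$pattern" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
  dd if="$pattern" of="$dev" bs=4096 count=256 oflag=direct
done

# Compare pass: the first 1 MiB of every device must match the pattern file.
for dev in "${nbd_list[@]}"; do
  cmp -b -n 1M "$pattern" "$dev" || echo "data mismatch on $dev" >&2
done
rm -f "$pattern"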
00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:38.435 10:06:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.435 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.696 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@41 -- # break 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.958 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.218 10:06:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.479 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.741 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:40.002 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:40.262 malloc_lvol_verify 00:06:40.262 10:06:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:40.524 68227342-c982-4fdc-9456-046e75288e39 00:06:40.524 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:40.524 fe884a06-dd0e-42a5-aa6c-734d06a96f4a 00:06:40.524 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:40.786 /dev/nbd0 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- 
# [[ -e /sys/block/nbd0/size ]] 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:40.786 mke2fs 1.47.0 (5-Feb-2023) 00:06:40.786 Discarding device blocks: 0/4096 done 00:06:40.786 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:40.786 00:06:40.786 Allocating group tables: 0/1 done 00:06:40.786 Writing inode tables: 0/1 done 00:06:40.786 Creating journal (1024 blocks): done 00:06:40.786 Writing superblocks and filesystem accounting information: 0/1 done 00:06:40.786 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:40.786 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.787 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:40.787 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.787 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60023 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 60023 ']' 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 60023 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60023 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:41.048 killing process with pid 60023 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60023' 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 60023 00:06:41.048 10:06:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 60023 00:06:42.000 10:06:47 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:42.000 00:06:42.000 real 0m10.113s 00:06:42.000 user 0m14.356s 00:06:42.000 sys 0m3.244s 00:06:42.000 10:06:47 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:42.000 10:06:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:42.000 ************************************ 00:06:42.000 END TEST bdev_nbd 00:06:42.000 ************************************ 00:06:42.000 10:06:47 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:42.000 10:06:47 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:42.000 skipping fio tests on NVMe due to multi-ns failures. 00:06:42.000 10:06:47 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:42.000 10:06:47 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:42.000 10:06:47 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.000 10:06:47 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:42.000 10:06:47 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:42.000 10:06:47 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.000 ************************************ 00:06:42.000 START TEST bdev_verify 00:06:42.000 ************************************ 00:06:42.000 10:06:47 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.000 [2024-11-04 10:06:47.609441] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:42.000 [2024-11-04 10:06:47.609560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60401 ] 00:06:42.261 [2024-11-04 10:06:47.770576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.261 [2024-11-04 10:06:47.874727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.261 [2024-11-04 10:06:47.874741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.830 Running I/O for 5 seconds... 
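Every attach and detach in the bdev_nbd test above is gated on the kernel's view of the device. waitfornbd polls /proc/partitions until the new nbdX entry appears, then proves the device answers reads by pulling a single 4 KiB block with O_DIRECT; waitfornbd_exit polls until the entry disappears after nbd_stop_disk. A minimal sketch of both helpers reconstructed from the trace; the retry budget of 20 and the 4 KiB direct-read probe are visible above, while the sleep between polls is an assumption:

waitfornbd() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions && break
    sleep 0.1   # poll interval is not shown in the trace; illustrative
  done
  # Read one 4 KiB block with O_DIRECT and confirm something arrived.
  dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
  local size
  size=$(stat -c %s /tmp/nbdtest)
  rm -f /tmp/nbdtest
  [[ $size != 0 ]]
}

waitfornbd_exit() {
  local nbd_name=$1 i
  for ((i = 1; i <= 20; i++)); do
    grep -q -w "$nbd_name" /proc/partitions || break   # device is gone
    sleep 0.1   # illustrative
  done
  return 0
}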
00:06:45.150 23360.00 IOPS, 91.25 MiB/s [2024-11-04T10:06:51.835Z] 23968.00 IOPS, 93.62 MiB/s [2024-11-04T10:06:52.776Z] 24128.00 IOPS, 94.25 MiB/s [2024-11-04T10:06:53.717Z] 23712.00 IOPS, 92.62 MiB/s [2024-11-04T10:06:53.717Z] 24230.40 IOPS, 94.65 MiB/s
00:06:47.972 Latency(us)
00:06:47.972 [2024-11-04T10:06:53.717Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:47.972 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0xbd0bd
00:06:47.972 Nvme0n1 : 5.04 1982.17 7.74 0.00 0.00 64375.01 12401.43 68964.04
00:06:47.972 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:47.972 Nvme0n1 : 5.04 2006.17 7.84 0.00 0.00 63546.15 12905.55 68964.04
00:06:47.972 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0xa0000
00:06:47.972 Nvme1n1 : 5.04 1981.62 7.74 0.00 0.00 64258.96 13812.97 59284.87
00:06:47.972 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0xa0000 length 0xa0000
00:06:47.972 Nvme1n1 : 5.06 2011.42 7.86 0.00 0.00 63401.06 15022.87 64527.75
00:06:47.972 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0x80000
00:06:47.972 Nvme2n1 : 5.04 1981.07 7.74 0.00 0.00 64182.01 13611.32 56461.78
00:06:47.972 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x80000 length 0x80000
00:06:47.972 Nvme2n1 : 5.04 2005.02 7.83 0.00 0.00 63411.14 15426.17 65737.65
00:06:47.972 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0x80000
00:06:47.972 Nvme2n2 : 5.07 1995.06 7.79 0.00 0.00 63704.91 8570.09 56058.49
00:06:47.972 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x80000 length 0x80000
00:06:47.972 Nvme2n2 : 5.06 2010.56 7.85 0.00 0.00 63080.35 6906.49 63317.86
00:06:47.972 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0x80000
00:06:47.972 Nvme2n3 : 5.07 1994.47 7.79 0.00 0.00 63577.08 8872.57 58478.28
00:06:47.972 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x80000 length 0x80000
00:06:47.972 Nvme2n3 : 5.07 2018.86 7.89 0.00 0.00 62773.05 8469.27 64527.75
00:06:47.972 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x0 length 0x20000
00:06:47.972 Nvme3n1 : 5.07 1993.92 7.79 0.00 0.00 63456.00 8418.86 61704.66
00:06:47.972 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:47.972 Verification LBA range: start 0x20000 length 0x20000
00:06:47.972 Nvme3n1 : 5.07 2018.22 7.88 0.00 0.00 62658.96 7309.78 69367.34
[2024-11-04T10:06:53.717Z] ===================================================================================================================
00:06:47.972 [2024-11-04T10:06:53.718Z] Total : 23998.55 93.74 0.00 0.00 63531.02 6906.49 69367.34
00:06:49.355
00:06:49.356
00:06:49.356 real 0m7.341s
00:06:49.356 user 0m13.771s
00:06:49.356 sys 0m0.207s
00:06:49.356 10:06:54 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:06:49.356 ************************************ 00:06:49.356 END TEST bdev_verify 00:06:49.356 ************************************ 00:06:49.356 10:06:54 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.356 10:06:54 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.356 10:06:54 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:49.356 10:06:54 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:49.356 10:06:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.356 ************************************ 00:06:49.356 START TEST bdev_verify_big_io 00:06:49.356 ************************************ 00:06:49.356 10:06:54 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.356 [2024-11-04 10:06:54.990583] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:49.356 [2024-11-04 10:06:54.990711] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60495 ] 00:06:49.617 [2024-11-04 10:06:55.147985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:49.617 [2024-11-04 10:06:55.254143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.617 [2024-11-04 10:06:55.254405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.187 Running I/O for 5 seconds... 
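Each stage in this log is driven by the run_test harness from autotest_common.sh: it prints a START TEST banner, executes the command under the shell's time builtin (which produces the real/user/sys triplets seen after every stage), and closes with an END TEST banner. A condensed sketch of that observable behavior; the real helper also manages xtrace state and failure accounting, which are omitted here:

run_test() {
  local test_name=$1
  shift
  echo "************************************"
  echo "START TEST $test_name"
  echo "************************************"
  time "$@"          # real/user/sys lines, as seen after each stage above
  local rc=$?
  echo "************************************"
  echo "END TEST $test_name"
  echo "************************************"
  return "$rc"
}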
00:06:55.777 2007.00 IOPS, 125.44 MiB/s [2024-11-04T10:07:02.092Z] 2373.50 IOPS, 148.34 MiB/s [2024-11-04T10:07:02.092Z] 2848.67 IOPS, 178.04 MiB/s
00:06:56.347 Latency(us)
00:06:56.347 [2024-11-04T10:07:02.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:56.347 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x0 length 0xbd0b
00:06:56.347 Nvme0n1 : 5.62 113.85 7.12 0.00 0.00 1084096.20 23693.78 1032444.06
00:06:56.347 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0xbd0b length 0xbd0b
00:06:56.347 Nvme0n1 : 5.69 117.70 7.36 0.00 0.00 1043916.62 10536.17 1084066.26
00:06:56.347 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x0 length 0xa000
00:06:56.347 Nvme1n1 : 5.72 116.33 7.27 0.00 0.00 1034570.76 100824.62 942105.21
00:06:56.347 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0xa000 length 0xa000
00:06:56.347 Nvme1n1 : 5.69 109.24 6.83 0.00 0.00 1081493.45 100421.32 1729343.80
00:06:56.347 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x0 length 0x8000
00:06:56.347 Nvme2n1 : 5.77 116.57 7.29 0.00 0.00 997297.47 101631.21 1084066.26
00:06:56.347 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x8000 length 0x8000
00:06:56.347 Nvme2n1 : 5.87 118.50 7.41 0.00 0.00 967363.51 45169.43 1768060.46
00:06:56.347 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x0 length 0x8000
00:06:56.347 Nvme2n2 : 5.78 121.88 7.62 0.00 0.00 939836.94 46782.62 935652.43
00:06:56.347 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x8000 length 0x8000
00:06:56.347 Nvme2n2 : 5.92 122.05 7.63 0.00 0.00 908229.25 65334.35 1806777.11
00:06:56.347 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.347 Verification LBA range: start 0x0 length 0x8000
00:06:56.348 Nvme2n3 : 5.87 126.36 7.90 0.00 0.00 873557.78 37305.11 974369.08
00:06:56.348 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.348 Verification LBA range: start 0x8000 length 0x8000
00:06:56.348 Nvme2n3 : 5.95 133.15 8.32 0.00 0.00 812389.37 18450.90 1832588.21
00:06:56.348 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:56.348 Verification LBA range: start 0x0 length 0x2000
00:06:56.348 Nvme3n1 : 5.94 145.88 9.12 0.00 0.00 743952.68 2104.71 1219574.55
00:06:56.348 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:56.348 Verification LBA range: start 0x2000 length 0x2000
00:06:56.348 Nvme3n1 : 5.98 163.09 10.19 0.00 0.00 651295.15 746.73 1871304.86
[2024-11-04T10:07:02.093Z] ===================================================================================================================
00:06:56.348 [2024-11-04T10:07:02.093Z] Total : 1504.60 94.04 0.00 0.00 910954.96 746.73 1871304.86
00:06:58.258
00:06:58.258 real 0m8.937s
00:06:58.258 user 0m16.895s
00:06:58.258 sys 0m0.255s
00:06:58.258 10:07:03 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:06:58.258 10:07:03 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:58.258 ************************************ 00:06:58.258 END TEST bdev_verify_big_io 00:06:58.258 ************************************ 00:06:58.258 10:07:03 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.258 10:07:03 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:06:58.258 10:07:03 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:58.258 10:07:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:58.258 ************************************ 00:06:58.258 START TEST bdev_write_zeroes 00:06:58.258 ************************************ 00:06:58.258 10:07:03 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:58.258 [2024-11-04 10:07:03.960757] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:06:58.258 [2024-11-04 10:07:03.960883] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60606 ] 00:06:58.516 [2024-11-04 10:07:04.112005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.516 [2024-11-04 10:07:04.217009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.082 Running I/O for 1 seconds... 
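A quick consistency check on the two verify tables above: bdevperf's MiB/s column is simply IOPS times the I/O size. The 4 KiB verify run totals 23998.55 IOPS x 4096 B, which is 93.74 MiB/s, and the 64 KiB big-I/O run totals 1504.60 IOPS x 65536 B, which is about 94.04 MiB/s; both match their Total rows, so the 16x larger I/O size buys roughly 16x fewer IOPS at the same aggregate throughput.

# Cross-check with awk (any POSIX awk):
awk 'BEGIN { printf "%.2f MiB/s\n", 23998.55 * 4096 / 1048576 }'    # 4 KiB verify total
awk 'BEGIN { printf "%.2f MiB/s\n", 1504.60 * 65536 / 1048576 }'    # 64 KiB big-I/O total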
00:07:00.455 61440.00 IOPS, 240.00 MiB/s
00:07:00.455 Latency(us)
00:07:00.455 [2024-11-04T10:07:06.200Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:00.455 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme0n1 : 1.02 10225.67 39.94 0.00 0.00 12495.46 8973.39 22988.01
00:07:00.455 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme1n1 : 1.02 10215.69 39.91 0.00 0.00 12490.92 8822.15 22988.01
00:07:00.455 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme2n1 : 1.02 10206.05 39.87 0.00 0.00 12452.33 8973.39 21979.77
00:07:00.455 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme2n2 : 1.02 10196.05 39.83 0.00 0.00 12423.90 9074.22 20971.52
00:07:00.455 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme2n3 : 1.02 10184.34 39.78 0.00 0.00 12404.43 8318.03 20164.92
00:07:00.455 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:00.455 Nvme3n1 : 1.03 10172.84 39.74 0.00 0.00 12387.13 7461.02 21173.17
[2024-11-04T10:07:06.200Z] ===================================================================================================================
[2024-11-04T10:07:06.200Z] Total : 61200.63 239.06 0.00 0.00 12442.36 7461.02 22988.01
00:07:00.726
00:07:00.726 real 0m2.553s
00:07:00.726 user 0m2.262s
00:07:00.726 sys 0m0.174s
00:07:00.726 10:07:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:00.726 10:07:06 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:00.726 ************************************
00:07:00.726 END TEST bdev_write_zeroes
00:07:00.726 ************************************
00:07:00.984 10:07:06 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:00.984 10:07:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:07:00.984 10:07:06 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:00.984 10:07:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:00.984 ************************************
00:07:00.984 START TEST bdev_json_nonenclosed
00:07:00.984 ************************************
00:07:00.984 10:07:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:00.984 [2024-11-04 10:07:06.549283] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization...
00:07:00.984 [2024-11-04 10:07:06.549393] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60661 ] 00:07:00.984 [2024-11-04 10:07:06.699167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.241 [2024-11-04 10:07:06.788943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.242 [2024-11-04 10:07:06.789025] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:01.242 [2024-11-04 10:07:06.789040] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:01.242 [2024-11-04 10:07:06.789048] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.242 00:07:01.242 real 0m0.473s 00:07:01.242 user 0m0.283s 00:07:01.242 sys 0m0.086s 00:07:01.242 10:07:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.242 10:07:06 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:01.242 ************************************ 00:07:01.242 END TEST bdev_json_nonenclosed 00:07:01.242 ************************************ 00:07:01.499 10:07:06 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.499 10:07:06 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:01.499 10:07:06 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.499 10:07:06 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.499 ************************************ 00:07:01.499 START TEST bdev_json_nonarray 00:07:01.499 ************************************ 00:07:01.499 10:07:06 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.499 [2024-11-04 10:07:07.057599] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:01.499 [2024-11-04 10:07:07.057735] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60681 ] 00:07:01.499 [2024-11-04 10:07:07.210027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.756 [2024-11-04 10:07:07.297595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.756 [2024-11-04 10:07:07.297677] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
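The bdev_json_nonenclosed test that just completed and the bdev_json_nonarray test continuing below probe bdevperf's --json config loader from opposite directions: nonenclosed.json trips 'Invalid JSON configuration: not enclosed in {}' because its top level is not a JSON object, and nonarray.json trips "'subsystems' should be an array". The loader expects a top-level object whose subsystems key holds an array of subsystem objects. The shapes below illustrate that contract; the log does not show the fixture files themselves, so these bodies are assumptions that merely reproduce each error class:

# Well-formed skeleton: top-level object with a "subsystems" array.
cat > good.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
EOF

# Would fail with "not enclosed in {}": top level is an array, not an object.
cat > nonenclosed.json <<'EOF'
[ { "subsystem": "bdev", "config": [] } ]
EOF

# Would fail with "'subsystems' should be an array": the value is an object.
cat > nonarray.json <<'EOF'
{ "subsystems": { "subsystem": "bdev", "config": [] } }
EOF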
00:07:01.756 [2024-11-04 10:07:07.297692] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:01.756 [2024-11-04 10:07:07.297700] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:01.756 00:07:01.756 real 0m0.460s 00:07:01.756 user 0m0.272s 00:07:01.756 sys 0m0.084s 00:07:01.756 10:07:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.756 10:07:07 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:01.756 ************************************ 00:07:01.756 END TEST bdev_json_nonarray 00:07:01.756 ************************************ 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:01.756 10:07:07 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:01.756 ************************************ 00:07:01.756 END TEST blockdev_nvme 00:07:01.756 ************************************ 00:07:01.756 00:07:01.756 real 0m37.369s 00:07:01.756 user 0m58.247s 00:07:01.756 sys 0m5.277s 00:07:01.756 10:07:07 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:01.756 10:07:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.013 10:07:07 -- spdk/autotest.sh@209 -- # uname -s 00:07:02.013 10:07:07 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:02.013 10:07:07 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:02.013 10:07:07 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:02.013 10:07:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:02.013 10:07:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.013 ************************************ 00:07:02.013 START TEST blockdev_nvme_gpt 00:07:02.013 ************************************ 00:07:02.013 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:02.013 * Looking for test storage... 
00:07:02.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:02.013 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:02.013 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:07:02.013 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:02.013 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:02.013 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.014 10:07:07 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.014 --rc genhtml_branch_coverage=1 00:07:02.014 --rc genhtml_function_coverage=1 00:07:02.014 --rc genhtml_legend=1 00:07:02.014 --rc geninfo_all_blocks=1 00:07:02.014 --rc geninfo_unexecuted_blocks=1 00:07:02.014 00:07:02.014 ' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.014 --rc 
genhtml_branch_coverage=1 00:07:02.014 --rc genhtml_function_coverage=1 00:07:02.014 --rc genhtml_legend=1 00:07:02.014 --rc geninfo_all_blocks=1 00:07:02.014 --rc geninfo_unexecuted_blocks=1 00:07:02.014 00:07:02.014 ' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.014 --rc genhtml_branch_coverage=1 00:07:02.014 --rc genhtml_function_coverage=1 00:07:02.014 --rc genhtml_legend=1 00:07:02.014 --rc geninfo_all_blocks=1 00:07:02.014 --rc geninfo_unexecuted_blocks=1 00:07:02.014 00:07:02.014 ' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:02.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.014 --rc genhtml_branch_coverage=1 00:07:02.014 --rc genhtml_function_coverage=1 00:07:02.014 --rc genhtml_legend=1 00:07:02.014 --rc geninfo_all_blocks=1 00:07:02.014 --rc geninfo_unexecuted_blocks=1 00:07:02.014 00:07:02.014 ' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:02.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
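The lcov gate traced above — scripts/common.sh comparing the installed lcov version against 2 to decide whether the old-style --rc lcov_* coverage flags are needed — reduces to a field-by-field numeric compare. A minimal standalone sketch of that logic (illustrative only; the shipped helper handles more operators):

lt() {                                  # succeed (return 0) when version $1 < $2
  local -a ver1 ver2
  IFS='.-:' read -ra ver1 <<< "$1"      # split on the same separators as the trace
  IFS='.-:' read -ra ver2 <<< "$2"
  local v max=${#ver1[@]}
  ((${#ver2[@]} > max)) && max=${#ver2[@]}
  for ((v = 0; v < max; v++)); do
    ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
    ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
  done
  return 1                              # equal is not less-than
}
lt "$(lcov --version | awk '{print $NF}')" 2 && echo 'lcov < 2: keep the --rc lcov_* flags'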
00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60765 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60765 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60765 ']' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.014 10:07:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:02.014 10:07:07 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:02.014 [2024-11-04 10:07:07.739209] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:07:02.014 [2024-11-04 10:07:07.739337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60765 ] 00:07:02.271 [2024-11-04 10:07:07.893155] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.271 [2024-11-04 10:07:07.995848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.204 10:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.204 10:07:08 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:07:03.204 10:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:03.204 10:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:03.204 10:07:08 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:03.204 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:03.204 Waiting for block devices as requested 00:07:03.462 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.462 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.462 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:03.462 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:08.721 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:08.721 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:08.721 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:08.721 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- 
# for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:08.722 BYT; 00:07:08.722 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:08.722 BYT; 00:07:08.722 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 
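The device probe in the loop above works by elimination: parted's machine-readable print reports "unrecognised disk label" for a namespace that carries no partition table, and the first such namespace becomes the scratch device for the GPT tests. A condensed sketch of that selection (illustrative, following the blockdev.sh trace):

for dev in /dev/nvme0n1 /dev/nvme1n1; do               # candidates come from /sys/block/nvme* above
  pt=$(parted "$dev" -ms print 2>&1)                   # capture the error text too, as the trace does
  if [[ $pt == *"$dev: unrecognised disk label"* ]]; then
    gpt_nvme=$dev                                      # unlabeled, so safe to repartition
    break
  fi
done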
00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:08.722 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:08.979 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:08.979 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:08.979 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:09.237 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.237 10:07:14 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.237 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:09.237 10:07:14 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:10.609 The operation has completed successfully. 00:07:10.609 10:07:16 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:11.981 The operation has completed successfully. 
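Taken together, the partitioning just performed reduces to three commands: lay down a fresh GPT with two equal halves, then stamp each half with the SPDK partition-type GUIDs that the trace read out of module/bdev/gpt/gpt.h. A condensed replay (same device, partition names, and GUIDs as in the log):

parted -s /dev/nvme0n1 mklabel gpt \
  mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100%
# partition 1: the current SPDK GPT type GUID, plus a fixed unique partition GUID
sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1
# partition 2: the legacy ("old") SPDK GPT type GUID
sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1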
00:07:11.981 10:07:17 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:12.499 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.758 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.758 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.758 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:12.758 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.758 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.758 [] 00:07:12.758 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:12.758 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:12.758 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.758 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.016 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.016 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:13.016 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.016 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.016 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:13.016 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.016 10:07:18 blockdev_nvme_gpt -- 
common/autotest_common.sh@10 -- # set +x 00:07:13.276 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.276 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:13.276 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:13.276 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.276 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:13.276 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.276 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.276 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:13.276 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:13.277 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "509eabef-afa4-4920-a063-5fff5045c1b4"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "509eabef-afa4-4920-a063-5fff5045c1b4",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": 
"6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "4217d686-4e0a-4993-94c8-d1f2b2b87611"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4217d686-4e0a-4993-94c8-d1f2b2b87611",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "c34de089-7975-49ce-9951-5a667426cb1f"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c34de089-7975-49ce-9951-5a667426cb1f",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' 
"zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "eab9097c-4f0a-420d-99a2-934c548fd493"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eab9097c-4f0a-420d-99a2-934c548fd493",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "0304225a-81da-4a0a-83e1-0a0796de875c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "0304225a-81da-4a0a-83e1-0a0796de875c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' 
"subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:13.277 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:13.277 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:13.277 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:13.277 10:07:18 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60765 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60765 ']' 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60765 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60765 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:13.277 killing process with pid 60765 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60765' 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60765 00:07:13.277 10:07:18 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60765 00:07:14.670 10:07:20 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:14.670 10:07:20 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.670 10:07:20 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:14.670 10:07:20 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:14.670 10:07:20 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:14.670 ************************************ 00:07:14.670 START TEST bdev_hello_world 00:07:14.670 ************************************ 00:07:14.670 10:07:20 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:14.670 [2024-11-04 10:07:20.203700] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:07:14.670 [2024-11-04 10:07:20.203847] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61395 ] 00:07:14.670 [2024-11-04 10:07:20.368011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.927 [2024-11-04 10:07:20.474671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.492 [2024-11-04 10:07:21.025371] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:15.492 [2024-11-04 10:07:21.025429] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:15.492 [2024-11-04 10:07:21.025454] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:15.492 [2024-11-04 10:07:21.027954] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:15.492 [2024-11-04 10:07:21.028872] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:15.492 [2024-11-04 10:07:21.028915] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:15.492 [2024-11-04 10:07:21.029087] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:15.492 00:07:15.492 [2024-11-04 10:07:21.029120] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:16.058 00:07:16.058 real 0m1.601s 00:07:16.058 user 0m1.312s 00:07:16.058 sys 0m0.179s 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:16.058 ************************************ 00:07:16.058 END TEST bdev_hello_world 00:07:16.058 ************************************ 00:07:16.058 10:07:21 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:16.058 10:07:21 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:16.058 10:07:21 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:16.058 10:07:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.058 ************************************ 00:07:16.058 START TEST bdev_bounds 00:07:16.058 ************************************ 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61432 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.058 Process bdevio pid: 61432 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61432' 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61432 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61432 ']' 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:16.058 Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:16.058 10:07:21 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.317 [2024-11-04 10:07:21.850534] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:16.317 [2024-11-04 10:07:21.850691] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61432 ] 00:07:16.317 [2024-11-04 10:07:22.019590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.575 [2024-11-04 10:07:22.126014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:16.575 [2024-11-04 10:07:22.126284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.575 [2024-11-04 10:07:22.126413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.200 10:07:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.200 10:07:22 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:17.200 10:07:22 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:17.200 I/O targets: 00:07:17.200 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:17.200 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:17.200 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:17.200 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.200 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.200 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.200 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:17.200 00:07:17.200 00:07:17.200 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.200 http://cunit.sourceforge.net/ 00:07:17.200 00:07:17.200 00:07:17.200 Suite: bdevio tests on: Nvme3n1 00:07:17.200 Test: blockdev write read block ...passed 00:07:17.200 Test: blockdev write zeroes read block ...passed 00:07:17.200 Test: blockdev write zeroes read no split ...passed 00:07:17.200 Test: blockdev write zeroes read split ...passed 00:07:17.200 Test: blockdev write zeroes read split partial ...passed 00:07:17.200 Test: blockdev reset ...[2024-11-04 10:07:22.829614] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:17.200 [2024-11-04 10:07:22.832845] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:17.200 passed 00:07:17.200 Test: blockdev write read 8 blocks ...passed 00:07:17.200 Test: blockdev write read size > 128k ...passed 00:07:17.200 Test: blockdev write read invalid size ...passed 00:07:17.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.200 Test: blockdev write read max offset ...passed 00:07:17.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.200 Test: blockdev writev readv 8 blocks ...passed 00:07:17.200 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.200 Test: blockdev writev readv block ...passed 00:07:17.200 Test: blockdev writev readv size > 128k ...passed 00:07:17.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.200 Test: blockdev comparev and writev ...[2024-11-04 10:07:22.838883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a04000 len:0x1000 00:07:17.200 [2024-11-04 10:07:22.838937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.200 passed 00:07:17.200 Test: blockdev nvme passthru rw ...passed 00:07:17.200 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:07:22.839512] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.200 [2024-11-04 10:07:22.839533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.200 passed 00:07:17.200 Test: blockdev nvme admin passthru ...passed 00:07:17.200 Test: blockdev copy ...passed 00:07:17.200 Suite: bdevio tests on: Nvme2n3 00:07:17.200 Test: blockdev write read block ...passed 00:07:17.200 Test: blockdev write zeroes read block ...passed 00:07:17.200 Test: blockdev write zeroes read no split ...passed 00:07:17.200 Test: blockdev write zeroes read split ...passed 00:07:17.200 Test: blockdev write zeroes read split partial ...passed 00:07:17.200 Test: blockdev reset ...[2024-11-04 10:07:22.882761] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.200 [2024-11-04 10:07:22.886190] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.200 passed 00:07:17.200 Test: blockdev write read 8 blocks ...passed 00:07:17.200 Test: blockdev write read size > 128k ...passed 00:07:17.200 Test: blockdev write read invalid size ...passed 00:07:17.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.200 Test: blockdev write read max offset ...passed 00:07:17.200 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.200 Test: blockdev writev readv 8 blocks ...passed 00:07:17.200 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.200 Test: blockdev writev readv block ...passed 00:07:17.200 Test: blockdev writev readv size > 128k ...passed 00:07:17.200 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.200 Test: blockdev comparev and writev ...[2024-11-04 10:07:22.891915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a02000 len:0x1000 00:07:17.200 passed 00:07:17.200 Test: blockdev nvme passthru rw ...[2024-11-04 10:07:22.891962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.200 passed 00:07:17.200 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.200 Test: blockdev nvme admin passthru ...[2024-11-04 10:07:22.892524] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.200 [2024-11-04 10:07:22.892544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.200 passed 00:07:17.200 Test: blockdev copy ...passed 00:07:17.200 Suite: bdevio tests on: Nvme2n2 00:07:17.200 Test: blockdev write read block ...passed 00:07:17.200 Test: blockdev write zeroes read block ...passed 00:07:17.200 Test: blockdev write zeroes read no split ...passed 00:07:17.200 Test: blockdev write zeroes read split ...passed 00:07:17.200 Test: blockdev write zeroes read split partial ...passed 00:07:17.200 Test: blockdev reset ...[2024-11-04 10:07:22.937349] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.200 [2024-11-04 10:07:22.940595] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.200 passed 00:07:17.200 Test: blockdev write read 8 blocks ...passed 00:07:17.200 Test: blockdev write read size > 128k ...passed 00:07:17.200 Test: blockdev write read invalid size ...passed 00:07:17.200 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.200 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.200 Test: blockdev write read max offset ...passed 00:07:17.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.458 Test: blockdev writev readv 8 blocks ...passed 00:07:17.458 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.458 Test: blockdev writev readv block ...passed 00:07:17.458 Test: blockdev writev readv size > 128k ...passed 00:07:17.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.458 Test: blockdev comparev and writev ...[2024-11-04 10:07:22.946964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc38000 len:0x1000 00:07:17.458 [2024-11-04 10:07:22.947013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.458 passed 00:07:17.458 Test: blockdev nvme passthru rw ...passed 00:07:17.458 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:07:22.947631] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.458 [2024-11-04 10:07:22.947654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.458 passed 00:07:17.458 Test: blockdev nvme admin passthru ...passed 00:07:17.458 Test: blockdev copy ...passed 00:07:17.458 Suite: bdevio tests on: Nvme2n1 00:07:17.458 Test: blockdev write read block ...passed 00:07:17.458 Test: blockdev write zeroes read block ...passed 00:07:17.458 Test: blockdev write zeroes read no split ...passed 00:07:17.458 Test: blockdev write zeroes read split ...passed 00:07:17.458 Test: blockdev write zeroes read split partial ...passed 00:07:17.458 Test: blockdev reset ...[2024-11-04 10:07:22.990812] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:17.458 [2024-11-04 10:07:22.993726] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:17.458 passed 00:07:17.458 Test: blockdev write read 8 blocks ...passed 00:07:17.458 Test: blockdev write read size > 128k ...passed 00:07:17.458 Test: blockdev write read invalid size ...passed 00:07:17.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.458 Test: blockdev write read max offset ...passed 00:07:17.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.458 Test: blockdev writev readv 8 blocks ...passed 00:07:17.458 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.458 Test: blockdev writev readv block ...passed 00:07:17.458 Test: blockdev writev readv size > 128k ...passed 00:07:17.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.458 Test: blockdev comparev and writev ...[2024-11-04 10:07:22.999506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cfc34000 len:0x1000 00:07:17.458 [2024-11-04 10:07:22.999554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.458 passed 00:07:17.458 Test: blockdev nvme passthru rw ...passed 00:07:17.458 Test: blockdev nvme passthru vendor specific ...[2024-11-04 10:07:23.000111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.458 [2024-11-04 10:07:23.000131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.458 passed 00:07:17.458 Test: blockdev nvme admin passthru ...passed 00:07:17.458 Test: blockdev copy ...passed 00:07:17.458 Suite: bdevio tests on: Nvme1n1p2 00:07:17.458 Test: blockdev write read block ...passed 00:07:17.458 Test: blockdev write zeroes read block ...passed 00:07:17.458 Test: blockdev write zeroes read no split ...passed 00:07:17.458 Test: blockdev write zeroes read split ...passed 00:07:17.458 Test: blockdev write zeroes read split partial ...passed 00:07:17.458 Test: blockdev reset ...[2024-11-04 10:07:23.045263] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.458 [2024-11-04 10:07:23.047747] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.458 passed 00:07:17.458 Test: blockdev write read 8 blocks ...passed 00:07:17.458 Test: blockdev write read size > 128k ...passed 00:07:17.458 Test: blockdev write read invalid size ...passed 00:07:17.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.458 Test: blockdev write read max offset ...passed 00:07:17.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.458 Test: blockdev writev readv 8 blocks ...passed 00:07:17.458 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.458 Test: blockdev writev readv block ...passed 00:07:17.458 Test: blockdev writev readv size > 128k ...passed 00:07:17.458 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.458 Test: blockdev comparev and writev ...[2024-11-04 10:07:23.053186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2cfc30000 len:0x1000 00:07:17.458 [2024-11-04 10:07:23.053227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.458 passed 00:07:17.458 Test: blockdev nvme passthru rw ...passed 00:07:17.458 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.458 Test: blockdev nvme admin passthru ...passed 00:07:17.458 Test: blockdev copy ...passed 00:07:17.458 Suite: bdevio tests on: Nvme1n1p1 00:07:17.458 Test: blockdev write read block ...passed 00:07:17.458 Test: blockdev write zeroes read block ...passed 00:07:17.458 Test: blockdev write zeroes read no split ...passed 00:07:17.458 Test: blockdev write zeroes read split ...passed 00:07:17.458 Test: blockdev write zeroes read split partial ...passed 00:07:17.458 Test: blockdev reset ...[2024-11-04 10:07:23.097634] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:17.458 [2024-11-04 10:07:23.100235] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:17.458 passed 00:07:17.458 Test: blockdev write read 8 blocks ...passed 00:07:17.458 Test: blockdev write read size > 128k ...passed 00:07:17.458 Test: blockdev write read invalid size ...passed 00:07:17.458 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.458 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.458 Test: blockdev write read max offset ...passed 00:07:17.458 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.458 Test: blockdev writev readv 8 blocks ...passed 00:07:17.458 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.458 Test: blockdev writev readv block ...passed 00:07:17.459 Test: blockdev writev readv size > 128k ...passed 00:07:17.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.459 Test: blockdev comparev and writev ...[2024-11-04 10:07:23.107819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2aec0e000 len:0x1000 00:07:17.459 [2024-11-04 10:07:23.107935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.459 passed 00:07:17.459 Test: blockdev nvme passthru rw ...passed 00:07:17.459 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.459 Test: blockdev nvme admin passthru ...passed 00:07:17.459 Test: blockdev copy ...passed 00:07:17.459 Suite: bdevio tests on: Nvme0n1 00:07:17.459 Test: blockdev write read block ...passed 00:07:17.459 Test: blockdev write zeroes read block ...passed 00:07:17.459 Test: blockdev write zeroes read no split ...passed 00:07:17.459 Test: blockdev write zeroes read split ...passed 00:07:17.459 Test: blockdev write zeroes read split partial ...passed 00:07:17.459 Test: blockdev reset ...[2024-11-04 10:07:23.154519] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:17.459 [2024-11-04 10:07:23.157150] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:17.459 passed 00:07:17.459 Test: blockdev write read 8 blocks ...passed 00:07:17.459 Test: blockdev write read size > 128k ...passed 00:07:17.459 Test: blockdev write read invalid size ...passed 00:07:17.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.459 Test: blockdev write read max offset ...passed 00:07:17.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.459 Test: blockdev writev readv 8 blocks ...passed 00:07:17.459 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.459 Test: blockdev writev readv block ...passed 00:07:17.459 Test: blockdev writev readv size > 128k ...passed 00:07:17.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.459 Test: blockdev comparev and writev ...[2024-11-04 10:07:23.162353] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:17.459 separate metadata which is not supported yet. 
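
The *ERROR* line above is a graceful skip rather than a failure: Nvme0n1 is formatted with a separate (non-interleaved) metadata buffer, which the comparev_and_writev case cannot exercise yet, so the suite still finishes as passed. A bdev's metadata layout can be checked over RPC while the app is up; a sketch, assuming the default RPC socket and that jq is available (assumptions, not taken from this run):

  # md_size > 0 together with "md_interleave": false means separate metadata
  sudo scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {name, block_size, md_size, md_interleave}'
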
00:07:17.459 passed
00:07:17.459 Test: blockdev nvme passthru rw ...passed
00:07:17.459 Test: blockdev nvme passthru vendor specific ...passed
00:07:17.459 Test: blockdev nvme admin passthru ...[2024-11-04 10:07:23.162753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0
00:07:17.459 [2024-11-04 10:07:23.162802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1
00:07:17.459 passed
00:07:17.459 Test: blockdev copy ...passed
00:07:17.459
00:07:17.459 Run Summary:    Type   Total     Ran  Passed  Failed  Inactive
00:07:17.459       suites        7       7     n/a       0         0
00:07:17.459        tests      161     161     161       0         0
00:07:17.459      asserts     1025    1025    1025       0       n/a
00:07:17.459
00:07:17.459 Elapsed time = 1.034 seconds
00:07:17.459 0
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61432
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61432 ']'
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61432
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:07:17.459 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61432
00:07:17.716 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:07:17.716 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:07:17.716 killing process with pid 61432
10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61432'
10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61432
00:07:17.716 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61432
00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:07:18.281
00:07:18.281 real 0m1.972s
00:07:18.281 user 0m4.969s
00:07:18.281 sys 0m0.288s
00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:18.281 ************************************
00:07:18.281 END TEST bdev_bounds
00:07:18.281 ************************************
00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:07:18.281 10:07:23 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:18.281 10:07:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:07:18.281 10:07:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:18.281 10:07:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:18.281 ************************************
00:07:18.281 START TEST bdev_nbd
00:07:18.281 ************************************
00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' ''
00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:07:18.281 10:07:23
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:18.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61486 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61486 /var/tmp/spdk-nbd.sock 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61486 ']' 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:18.281 10:07:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:18.281 [2024-11-04 10:07:23.887568] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:07:18.281 [2024-11-04 10:07:23.887739] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.539 [2024-11-04 10:07:24.062858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.539 [2024-11-04 10:07:24.149351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:19.104 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:19.105 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:19.105 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:19.105 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.105 10:07:24 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.362 1+0 records in 00:07:19.362 1+0 records out 00:07:19.362 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354326 s, 11.6 MB/s 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.362 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:19.619 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.620 1+0 records in 00:07:19.620 1+0 records out 00:07:19.620 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351268 s, 11.7 MB/s 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.620 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:19.878 1+0 records in 00:07:19.878 1+0 records out 00:07:19.878 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445082 s, 9.2 MB/s 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.878 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.137 1+0 records in 00:07:20.137 1+0 records out 00:07:20.137 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448077 s, 9.1 MB/s 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.137 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.395 1+0 records in 00:07:20.395 1+0 records out 00:07:20.395 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485573 s, 8.4 MB/s 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.395 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.396 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.396 10:07:25 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.396 1+0 records in 00:07:20.396 1+0 records out 00:07:20.396 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000630573 s, 6.5 MB/s 00:07:20.396 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.701 1+0 records in 00:07:20.701 1+0 records out 00:07:20.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438726 s, 9.3 MB/s 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.701 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd0", 00:07:20.980 "bdev_name": "Nvme0n1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd1", 00:07:20.980 "bdev_name": "Nvme1n1p1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd2", 00:07:20.980 "bdev_name": "Nvme1n1p2" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd3", 00:07:20.980 "bdev_name": "Nvme2n1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd4", 00:07:20.980 "bdev_name": "Nvme2n2" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd5", 00:07:20.980 "bdev_name": "Nvme2n3" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd6", 00:07:20.980 "bdev_name": "Nvme3n1" 00:07:20.980 } 00:07:20.980 ]' 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd0", 00:07:20.980 "bdev_name": "Nvme0n1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd1", 00:07:20.980 "bdev_name": "Nvme1n1p1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd2", 00:07:20.980 "bdev_name": "Nvme1n1p2" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd3", 00:07:20.980 "bdev_name": "Nvme2n1" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd4", 00:07:20.980 "bdev_name": "Nvme2n2" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd5", 00:07:20.980 "bdev_name": "Nvme2n3" 00:07:20.980 }, 00:07:20.980 { 00:07:20.980 "nbd_device": "/dev/nbd6", 00:07:20.980 "bdev_name": "Nvme3n1" 00:07:20.980 } 00:07:20.980 ]' 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:20.980 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:21.238 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:21.238 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:21.238 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:21.238 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.239 10:07:26 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.510 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.510 10:07:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.769 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.027 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.285 10:07:27 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.545 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:22.860 
10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:22.860 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:23.153 /dev/nbd0 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.153 1+0 records in 00:07:23.153 1+0 records out 00:07:23.153 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335013 s, 12.2 MB/s 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.153 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:23.412 /dev/nbd1 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.412 10:07:28 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.412 1+0 records in 00:07:23.412 1+0 records out 00:07:23.412 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000458984 s, 8.9 MB/s 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.412 10:07:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:23.669 /dev/nbd10 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.669 1+0 records in 00:07:23.669 1+0 records out 00:07:23.669 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000374398 s, 10.9 MB/s 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.669 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:23.669 /dev/nbd11 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.928 1+0 records in 00:07:23.928 1+0 records out 00:07:23.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000499555 s, 8.2 MB/s 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:23.928 /dev/nbd12 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
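
The loop traced here is autotest_common.sh's waitfornbd helper: it polls /proc/partitions until the kernel has registered the new nbd device, then pushes one 4 KiB direct-I/O read through it and checks that something was actually copied. A condensed sketch of that pattern (the temp-file path and the 0.1 s poll interval are illustrative; the real helper reads into test/bdev/nbdtest and retries the dd as well):

  waitfornbd() {
      local nbd_name=$1 i
      # wait until /dev/$nbd_name shows up in the kernel's partition list
      for ((i = 1; i <= 20; i++)); do
          grep -q -w "$nbd_name" /proc/partitions && break
          sleep 0.1
      done
      # prove the kernel can push I/O through the device: read one 4 KiB block
      dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      local size
      size=$(stat -c %s /tmp/nbdtest)
      rm -f /tmp/nbdtest
      [ "$size" != 0 ]   # the helper only insists the read was non-empty
  }
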
00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.928 1+0 records in 00:07:23.928 1+0 records out 00:07:23.928 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000405129 s, 10.1 MB/s 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.928 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:24.185 /dev/nbd13 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.185 1+0 records in 00:07:24.185 1+0 records out 00:07:24.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541643 s, 7.6 MB/s 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.185 10:07:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:24.442 /dev/nbd14 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.442 1+0 records in 00:07:24.442 1+0 records out 00:07:24.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000614721 s, 6.7 MB/s 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.442 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd0", 00:07:24.700 "bdev_name": "Nvme0n1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd1", 00:07:24.700 "bdev_name": "Nvme1n1p1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd10", 00:07:24.700 "bdev_name": "Nvme1n1p2" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd11", 00:07:24.700 "bdev_name": "Nvme2n1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd12", 00:07:24.700 "bdev_name": "Nvme2n2" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd13", 00:07:24.700 "bdev_name": "Nvme2n3" 
00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd14", 00:07:24.700 "bdev_name": "Nvme3n1" 00:07:24.700 } 00:07:24.700 ]' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd0", 00:07:24.700 "bdev_name": "Nvme0n1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd1", 00:07:24.700 "bdev_name": "Nvme1n1p1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd10", 00:07:24.700 "bdev_name": "Nvme1n1p2" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd11", 00:07:24.700 "bdev_name": "Nvme2n1" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd12", 00:07:24.700 "bdev_name": "Nvme2n2" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd13", 00:07:24.700 "bdev_name": "Nvme2n3" 00:07:24.700 }, 00:07:24.700 { 00:07:24.700 "nbd_device": "/dev/nbd14", 00:07:24.700 "bdev_name": "Nvme3n1" 00:07:24.700 } 00:07:24.700 ]' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:24.700 /dev/nbd1 00:07:24.700 /dev/nbd10 00:07:24.700 /dev/nbd11 00:07:24.700 /dev/nbd12 00:07:24.700 /dev/nbd13 00:07:24.700 /dev/nbd14' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:24.700 /dev/nbd1 00:07:24.700 /dev/nbd10 00:07:24.700 /dev/nbd11 00:07:24.700 /dev/nbd12 00:07:24.700 /dev/nbd13 00:07:24.700 /dev/nbd14' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:24.700 256+0 records in 00:07:24.700 256+0 records out 00:07:24.700 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00927113 s, 113 MB/s 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.700 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:24.701 256+0 records in 00:07:24.701 256+0 records out 00:07:24.701 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0684847 s, 15.3 MB/s 00:07:24.701 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.701 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:24.958 256+0 records in 00:07:24.958 256+0 records out 00:07:24.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0754809 s, 13.9 MB/s 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:24.958 256+0 records in 00:07:24.958 256+0 records out 00:07:24.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741107 s, 14.1 MB/s 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:24.958 256+0 records in 00:07:24.958 256+0 records out 00:07:24.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0734796 s, 14.3 MB/s 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:24.958 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:25.220 256+0 records in 00:07:25.220 256+0 records out 00:07:25.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0741101 s, 14.1 MB/s 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:25.220 256+0 records in 00:07:25.220 256+0 records out 00:07:25.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0722284 s, 14.5 MB/s 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:25.220 256+0 records in 00:07:25.220 256+0 records out 00:07:25.220 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0748111 s, 14.0 MB/s 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.220 10:07:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.478 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.478 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.478 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.478 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.478 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.479 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.479 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.479 10:07:31 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:25.479 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.479 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.737 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.994 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.252 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.510 10:07:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:26.510 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.769 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:27.027 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:27.285 malloc_lvol_verify 00:07:27.285 10:07:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:27.543 8b087195-bc8e-4dae-a4a5-81fdeff8e915 00:07:27.543 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:27.802 f6ed82bc-a1ea-4a86-a972-b609b06a9b2c 00:07:27.802 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:28.074 /dev/nbd0 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:28.074 mke2fs 1.47.0 (5-Feb-2023) 00:07:28.074 Discarding device blocks: 0/4096 done 00:07:28.074 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:28.074 00:07:28.074 Allocating group tables: 0/1 done 00:07:28.074 Writing inode tables: 0/1 done 00:07:28.074 Creating journal (1024 blocks): done 00:07:28.074 Writing superblocks and filesystem accounting information: 0/1 done 00:07:28.074 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:28.074 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61486 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61486 ']' 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61486 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61486 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:28.333 killing process with pid 61486 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61486' 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61486 00:07:28.333 10:07:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61486 00:07:29.267 10:07:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:29.267 00:07:29.267 real 0m10.935s 00:07:29.267 user 0m15.795s 00:07:29.267 sys 0m3.564s 00:07:29.267 10:07:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:29.267 10:07:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:29.267 ************************************ 00:07:29.267 END TEST bdev_nbd 00:07:29.267 ************************************ 00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:29.267 skipping fio tests on NVMe due to multi-ns failures. 00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:29.267 10:07:34 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:29.267 10:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:29.267 10:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:29.267 10:07:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:29.267 ************************************ 00:07:29.267 START TEST bdev_verify 00:07:29.267 ************************************ 00:07:29.267 10:07:34 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:29.267 [2024-11-04 10:07:34.833965] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:29.267 [2024-11-04 10:07:34.834089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61905 ] 00:07:29.267 [2024-11-04 10:07:34.992765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:29.526 [2024-11-04 10:07:35.098809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.526 [2024-11-04 10:07:35.098838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.091 Running I/O for 5 seconds... 
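For reference, bdev_verify is nothing more than the bdevperf example binary pointed at the generated bdev.json. A by-hand equivalent of the command echoed above would be the sketch below (paths are the ones used in this run; -q is the queue depth, -o the I/O size in bytes, -w the workload, -t the runtime in seconds, -m the reactor core mask, while -C and the trailing empty string are simply passed through from the test harness).

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3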
00:07:32.440 22464.00 IOPS, 87.75 MiB/s [2024-11-04T10:07:39.134Z] 22048.00 IOPS, 86.12 MiB/s [2024-11-04T10:07:40.066Z] 21909.33 IOPS, 85.58 MiB/s [2024-11-04T10:07:41.015Z] 21552.00 IOPS, 84.19 MiB/s [2024-11-04T10:07:41.015Z] 21491.20 IOPS, 83.95 MiB/s 00:07:35.270 Latency(us) 00:07:35.270 [2024-11-04T10:07:41.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:35.270 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x0 length 0xbd0bd 00:07:35.270 Nvme0n1 : 5.07 1528.33 5.97 0.00 0.00 83363.77 9527.93 83079.48 00:07:35.270 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:35.270 Nvme0n1 : 5.07 1514.68 5.92 0.00 0.00 84323.46 16232.76 80256.39 00:07:35.270 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x0 length 0x4ff80 00:07:35.270 Nvme1n1p1 : 5.09 1535.03 6.00 0.00 0.00 83012.90 16636.06 79046.50 00:07:35.270 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:35.270 Nvme1n1p1 : 5.07 1514.18 5.91 0.00 0.00 84235.03 15526.99 76223.41 00:07:35.270 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x0 length 0x4ff7f 00:07:35.270 Nvme1n1p2 : 5.09 1534.59 5.99 0.00 0.00 82885.26 14922.04 79449.80 00:07:35.270 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:35.270 Nvme1n1p2 : 5.08 1513.11 5.91 0.00 0.00 84110.40 16535.24 73400.32 00:07:35.270 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x0 length 0x80000 00:07:35.270 Nvme2n1 : 5.09 1534.18 5.99 0.00 0.00 82750.33 15325.34 80659.69 00:07:35.270 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.270 Verification LBA range: start 0x80000 length 0x80000 00:07:35.270 Nvme2n1 : 5.08 1512.57 5.91 0.00 0.00 83972.57 17442.66 68964.04 00:07:35.271 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x0 length 0x80000 00:07:35.271 Nvme2n2 : 5.09 1533.25 5.99 0.00 0.00 82595.67 17140.18 82676.18 00:07:35.271 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x80000 length 0x80000 00:07:35.271 Nvme2n2 : 5.08 1512.17 5.91 0.00 0.00 83816.89 17745.13 72997.02 00:07:35.271 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x0 length 0x80000 00:07:35.271 Nvme2n3 : 5.09 1532.83 5.99 0.00 0.00 82439.30 14518.74 83079.48 00:07:35.271 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x80000 length 0x80000 00:07:35.271 Nvme2n3 : 5.08 1511.74 5.91 0.00 0.00 83641.23 17241.01 75416.81 00:07:35.271 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x0 length 0x20000 00:07:35.271 Nvme3n1 : 5.10 1532.42 5.99 0.00 0.00 82306.64 9679.16 83482.78 00:07:35.271 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:35.271 Verification LBA range: start 0x20000 length 0x20000 00:07:35.271 Nvme3n1 
: 5.08 1511.30 5.90 0.00 0.00 83498.59 9427.10 80256.39 00:07:35.271 [2024-11-04T10:07:41.016Z] =================================================================================================================== 00:07:35.271 [2024-11-04T10:07:41.016Z] Total : 21320.39 83.28 0.00 0.00 83348.84 9427.10 83482.78 00:07:38.563 00:07:38.563 real 0m8.861s 00:07:38.563 user 0m16.740s 00:07:38.563 sys 0m0.252s 00:07:38.563 10:07:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.563 10:07:43 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 ************************************ 00:07:38.563 END TEST bdev_verify 00:07:38.563 ************************************ 00:07:38.563 10:07:43 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:38.563 10:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:38.563 10:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.563 10:07:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.563 ************************************ 00:07:38.563 START TEST bdev_verify_big_io 00:07:38.563 ************************************ 00:07:38.563 10:07:43 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:38.563 [2024-11-04 10:07:43.729630] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:38.563 [2024-11-04 10:07:43.729749] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62003 ] 00:07:38.563 [2024-11-04 10:07:43.892105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.563 [2024-11-04 10:07:44.003019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.563 [2024-11-04 10:07:44.003024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.137 Running I/O for 5 seconds... 
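A quick sanity check on the verify-pass throughput figures above: bdevperf's MiB/s column is just IOPS multiplied by the 4 KiB I/O size. For the final progress tick, for example:

  # 21491.20 IOPS at 4096 bytes per I/O, converted to MiB/s:
  echo 'scale=2; 21491.20 * 4096 / 1048576' | bc    # 83.95, matching the tick above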
00:07:45.286 663.00 IOPS, 41.44 MiB/s [2024-11-04T10:07:51.031Z] 2297.50 IOPS, 143.59 MiB/s [2024-11-04T10:07:51.309Z] 2815.33 IOPS, 175.96 MiB/s 00:07:45.564 Latency(us) 00:07:45.564 [2024-11-04T10:07:51.309Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:45.564 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.564 Verification LBA range: start 0x0 length 0xbd0b 00:07:45.564 Nvme0n1 : 6.03 79.60 4.97 0.00 0.00 1506225.76 21072.34 1406705.03 00:07:45.564 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.564 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:45.564 Nvme0n1 : 5.96 93.12 5.82 0.00 0.00 1315054.37 17140.18 1413157.81 00:07:45.564 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.564 Verification LBA range: start 0x0 length 0x4ff8 00:07:45.564 Nvme1n1p1 : 6.11 83.83 5.24 0.00 0.00 1409994.04 78643.20 1406705.03 00:07:45.564 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.564 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:45.564 Nvme1n1p1 : 6.04 95.76 5.98 0.00 0.00 1227326.53 114536.76 1264743.98 00:07:45.565 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x0 length 0x4ff7 00:07:45.565 Nvme1n1p2 : 6.03 89.16 5.57 0.00 0.00 1285425.39 154060.01 1284102.30 00:07:45.565 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:45.565 Nvme1n1p2 : 6.12 92.28 5.77 0.00 0.00 1237838.61 74610.22 2090699.22 00:07:45.565 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x0 length 0x8000 00:07:45.565 Nvme2n1 : 6.11 94.27 5.89 0.00 0.00 1187989.49 72593.72 1219574.55 00:07:45.565 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x8000 length 0x8000 00:07:45.565 Nvme2n1 : 6.12 92.26 5.77 0.00 0.00 1192289.09 75013.51 2116510.33 00:07:45.565 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x0 length 0x8000 00:07:45.565 Nvme2n2 : 6.18 99.22 6.20 0.00 0.00 1092266.45 21576.47 1419610.58 00:07:45.565 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x8000 length 0x8000 00:07:45.565 Nvme2n2 : 6.16 96.73 6.05 0.00 0.00 1103870.18 83482.78 2168132.53 00:07:45.565 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x0 length 0x8000 00:07:45.565 Nvme2n3 : 6.18 103.55 6.47 0.00 0.00 1013924.47 44161.18 1271196.75 00:07:45.565 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x8000 length 0x8000 00:07:45.565 Nvme2n3 : 6.18 105.94 6.62 0.00 0.00 980230.53 17946.78 2219754.73 00:07:45.565 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x0 length 0x2000 00:07:45.565 Nvme3n1 : 6.26 118.62 7.41 0.00 0.00 854808.32 12401.43 1309913.40 00:07:45.565 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:45.565 Verification LBA range: start 0x2000 length 0x2000 00:07:45.565 Nvme3n1 : 6.30 131.81 8.24 0.00 0.00 762688.14 3024.74 2000360.37 00:07:45.565 
[2024-11-04T10:07:51.310Z] =================================================================================================================== 00:07:45.565 [2024-11-04T10:07:51.310Z] Total : 1376.15 86.01 0.00 0.00 1127113.49 3024.74 2219754.73 00:07:46.964 00:07:46.964 real 0m8.957s 00:07:46.964 user 0m16.903s 00:07:46.964 sys 0m0.249s 00:07:46.964 10:07:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.964 ************************************ 00:07:46.964 END TEST bdev_verify_big_io 00:07:46.964 ************************************ 00:07:46.964 10:07:52 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:46.964 10:07:52 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:46.964 10:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:46.964 10:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.964 10:07:52 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:46.964 ************************************ 00:07:46.964 START TEST bdev_write_zeroes 00:07:46.964 ************************************ 00:07:46.964 10:07:52 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:47.221 [2024-11-04 10:07:52.750311] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:47.221 [2024-11-04 10:07:52.750434] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62123 ] 00:07:47.221 [2024-11-04 10:07:52.912909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.479 [2024-11-04 10:07:53.017010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.043 Running I/O for 1 seconds... 
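The write_zeroes pass running here is the same binary again with a different workload. Per the command echoed in the trace, the by-hand form (same path caveats as the verify sketch earlier) is:

  # -w write_zeroes issues zero-fill commands rather than data writes, so it
  # exercises the bdevs' write-zeroes path; one 1-second pass suffices, and
  # the core mask is left at its default here (the EAL line shows -c 0x1).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1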
00:07:48.976 55552.00 IOPS, 217.00 MiB/s 00:07:48.976 Latency(us) 00:07:48.976 [2024-11-04T10:07:54.721Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:48.976 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme0n1 : 1.03 7910.76 30.90 0.00 0.00 16131.17 12149.37 32868.82 00:07:48.976 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme1n1p1 : 1.03 7897.66 30.85 0.00 0.00 16134.56 11846.89 33877.07 00:07:48.976 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme1n1p2 : 1.03 7884.69 30.80 0.00 0.00 16103.67 12149.37 34078.72 00:07:48.976 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme2n1 : 1.03 7872.92 30.75 0.00 0.00 16056.19 11746.07 32062.23 00:07:48.976 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme2n2 : 1.03 7861.30 30.71 0.00 0.00 16039.87 10485.76 31053.98 00:07:48.976 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme2n3 : 1.04 7850.83 30.67 0.00 0.00 16025.69 10032.05 29642.44 00:07:48.976 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:48.976 Nvme3n1 : 1.04 7841.87 30.63 0.00 0.00 16016.15 9931.22 31658.93 00:07:48.976 [2024-11-04T10:07:54.721Z] =================================================================================================================== 00:07:48.976 [2024-11-04T10:07:54.721Z] Total : 55120.02 215.31 0.00 0.00 16072.47 9931.22 34078.72 00:07:49.907 00:07:49.907 real 0m2.785s 00:07:49.907 user 0m2.471s 00:07:49.907 sys 0m0.194s 00:07:49.907 10:07:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.907 ************************************ 00:07:49.907 END TEST bdev_write_zeroes 00:07:49.907 ************************************ 00:07:49.907 10:07:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:49.907 10:07:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.907 10:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:49.907 10:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.907 10:07:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:49.907 ************************************ 00:07:49.907 START TEST bdev_json_nonenclosed 00:07:49.907 ************************************ 00:07:49.907 10:07:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:49.907 [2024-11-04 10:07:55.587052] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:07:49.907 [2024-11-04 10:07:55.587200] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62176 ] 00:07:50.165 [2024-11-04 10:07:55.748291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.165 [2024-11-04 10:07:55.859812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.165 [2024-11-04 10:07:55.859924] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:50.165 [2024-11-04 10:07:55.859947] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.165 [2024-11-04 10:07:55.859960] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.423 00:07:50.423 real 0m0.535s 00:07:50.423 user 0m0.336s 00:07:50.423 sys 0m0.094s 00:07:50.423 10:07:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.423 10:07:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:50.423 ************************************ 00:07:50.423 END TEST bdev_json_nonenclosed 00:07:50.423 ************************************ 00:07:50.423 10:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.423 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:50.423 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.423 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:50.423 ************************************ 00:07:50.423 START TEST bdev_json_nonarray 00:07:50.423 ************************************ 00:07:50.423 10:07:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:50.681 [2024-11-04 10:07:56.174016] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:07:50.681 [2024-11-04 10:07:56.174158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62202 ] 00:07:50.681 [2024-11-04 10:07:56.339236] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.939 [2024-11-04 10:07:56.454080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.939 [2024-11-04 10:07:56.454171] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
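These two negative tests (bdev_json_nonenclosed above, bdev_json_nonarray here) feed deliberately malformed configs to the same binary and expect exactly the json_config errors shown. The repo fixtures themselves are not reproduced in the trace, so the file bodies in the sketch below are illustrative guesses that trip the same two checks.

  # A top-level array is valid JSON but is not enclosed in {} ...
  printf '[]\n' > /tmp/nonenclosed.json
  # ... and an object whose "subsystems" member is not an array:
  printf '{ "subsystems": {} }\n' > /tmp/nonarray.json

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  $bdevperf --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1  # -> "not enclosed in {}"
  $bdevperf --json /tmp/nonarray.json    -q 128 -o 4096 -w write_zeroes -t 1  # -> "'subsystems' should be an array"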
00:07:50.939 [2024-11-04 10:07:56.454188] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:50.939 [2024-11-04 10:07:56.454198] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:50.939 00:07:50.939 real 0m0.537s 00:07:50.939 user 0m0.336s 00:07:50.939 sys 0m0.095s 00:07:50.939 10:07:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.939 10:07:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:50.939 ************************************ 00:07:50.939 END TEST bdev_json_nonarray 00:07:50.939 ************************************ 00:07:50.939 10:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:50.939 10:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:50.939 10:07:56 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:50.939 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.939 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.939 10:07:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.197 ************************************ 00:07:51.197 START TEST bdev_gpt_uuid 00:07:51.197 ************************************ 00:07:51.197 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:51.197 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62227 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62227 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62227 ']' 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:51.198 10:07:56 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:51.198 [2024-11-04 10:07:56.763837] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:07:51.198 [2024-11-04 10:07:56.763965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62227 ] 00:07:51.198 [2024-11-04 10:07:56.923588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.456 [2024-11-04 10:07:57.027888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.021 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:52.021 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:52.021 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:52.021 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.021 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.279 Some configs were skipped because the RPC state that can call them passed over. 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:52.279 { 00:07:52.279 "name": "Nvme1n1p1", 00:07:52.279 "aliases": [ 00:07:52.279 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:52.279 ], 00:07:52.279 "product_name": "GPT Disk", 00:07:52.279 "block_size": 4096, 00:07:52.279 "num_blocks": 655104, 00:07:52.279 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.279 "assigned_rate_limits": { 00:07:52.279 "rw_ios_per_sec": 0, 00:07:52.279 "rw_mbytes_per_sec": 0, 00:07:52.279 "r_mbytes_per_sec": 0, 00:07:52.279 "w_mbytes_per_sec": 0 00:07:52.279 }, 00:07:52.279 "claimed": false, 00:07:52.279 "zoned": false, 00:07:52.279 "supported_io_types": { 00:07:52.279 "read": true, 00:07:52.279 "write": true, 00:07:52.279 "unmap": true, 00:07:52.279 "flush": true, 00:07:52.279 "reset": true, 00:07:52.279 "nvme_admin": false, 00:07:52.279 "nvme_io": false, 00:07:52.279 "nvme_io_md": false, 00:07:52.279 "write_zeroes": true, 00:07:52.279 "zcopy": false, 00:07:52.279 "get_zone_info": false, 00:07:52.279 "zone_management": false, 00:07:52.279 "zone_append": false, 00:07:52.279 "compare": true, 00:07:52.279 "compare_and_write": false, 00:07:52.279 "abort": true, 00:07:52.279 "seek_hole": false, 00:07:52.279 "seek_data": false, 00:07:52.279 "copy": true, 00:07:52.279 "nvme_iov_md": false 00:07:52.279 }, 00:07:52.279 "driver_specific": { 
00:07:52.279 "gpt": { 00:07:52.279 "base_bdev": "Nvme1n1", 00:07:52.279 "offset_blocks": 256, 00:07:52.279 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:52.279 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:52.279 "partition_name": "SPDK_TEST_first" 00:07:52.279 } 00:07:52.279 } 00:07:52.279 } 00:07:52.279 ]' 00:07:52.279 10:07:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:52.537 { 00:07:52.537 "name": "Nvme1n1p2", 00:07:52.537 "aliases": [ 00:07:52.537 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:52.537 ], 00:07:52.537 "product_name": "GPT Disk", 00:07:52.537 "block_size": 4096, 00:07:52.537 "num_blocks": 655103, 00:07:52.537 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.537 "assigned_rate_limits": { 00:07:52.537 "rw_ios_per_sec": 0, 00:07:52.537 "rw_mbytes_per_sec": 0, 00:07:52.537 "r_mbytes_per_sec": 0, 00:07:52.537 "w_mbytes_per_sec": 0 00:07:52.537 }, 00:07:52.537 "claimed": false, 00:07:52.537 "zoned": false, 00:07:52.537 "supported_io_types": { 00:07:52.537 "read": true, 00:07:52.537 "write": true, 00:07:52.537 "unmap": true, 00:07:52.537 "flush": true, 00:07:52.537 "reset": true, 00:07:52.537 "nvme_admin": false, 00:07:52.537 "nvme_io": false, 00:07:52.537 "nvme_io_md": false, 00:07:52.537 "write_zeroes": true, 00:07:52.537 "zcopy": false, 00:07:52.537 "get_zone_info": false, 00:07:52.537 "zone_management": false, 00:07:52.537 "zone_append": false, 00:07:52.537 "compare": true, 00:07:52.537 "compare_and_write": false, 00:07:52.537 "abort": true, 00:07:52.537 "seek_hole": false, 00:07:52.537 "seek_data": false, 00:07:52.537 "copy": true, 00:07:52.537 "nvme_iov_md": false 00:07:52.537 }, 00:07:52.537 "driver_specific": { 00:07:52.537 "gpt": { 00:07:52.537 "base_bdev": "Nvme1n1", 00:07:52.537 "offset_blocks": 655360, 00:07:52.537 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:52.537 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:52.537 "partition_name": "SPDK_TEST_second" 00:07:52.537 } 00:07:52.537 } 00:07:52.537 } 00:07:52.537 ]' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62227 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62227 ']' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62227 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62227 00:07:52.537 killing process with pid 62227 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62227' 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62227 00:07:52.537 10:07:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62227 00:07:54.439 ************************************ 00:07:54.439 END TEST bdev_gpt_uuid 00:07:54.439 ************************************ 00:07:54.439 00:07:54.439 real 0m3.029s 00:07:54.439 user 0m3.174s 00:07:54.439 sys 0m0.373s 00:07:54.439 10:07:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.439 10:07:59 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:54.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.698 Waiting for block devices as requested 00:07:54.698 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.698 0000:00:10.0 (1b36 0010): 
00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:54.439 10:07:59 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:54.439 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:54.698 Waiting for block devices as requested 00:07:54.698 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.698 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.698 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:54.955 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:00.226 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:00.226 10:08:05 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:00.226 10:08:05 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:00.226 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:00.226 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:00.226 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:00.226 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:00.226 10:08:05 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:00.226 00:08:00.226 real 0m58.364s 00:08:00.226 user 1m15.549s 00:08:00.226 sys 0m7.702s 00:08:00.226 ************************************ 00:08:00.226 END TEST blockdev_nvme_gpt 00:08:00.226 ************************************ 00:08:00.226 10:08:05 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.226 10:08:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
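The wipefs output above is the GPT teardown in miniature: the two erased 8-byte runs are the "EFI PART" signature in ASCII (45 46 49 20 50 41 52 54) at the primary header and at the backup header near the end of the disk, and the 55 aa pair is the protective-MBR boot signature. A quick way to eyeball those signatures by hand, assuming the same /dev/nvme0n1 and, judging by the 0x1000 offset, 4096-byte logical blocks (the primary GPT header lives at LBA 1):

  # Primary GPT header signature at LBA 1 (byte offset 4096 on a 4K-LBA disk).
  dd if=/dev/nvme0n1 bs=1 skip=4096 count=8 2>/dev/null | hexdump -C
  # Protective MBR boot signature at byte offset 0x1fe.
  dd if=/dev/nvme0n1 bs=1 skip=510 count=2 2>/dev/null | hexdump -C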
00:08:00.226 10:08:05 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:00.226 10:08:05 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:00.226 10:08:05 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.226 10:08:05 -- common/autotest_common.sh@10 -- # set +x 00:08:00.226 ************************************ 00:08:00.226 START TEST nvme 00:08:00.226 ************************************ 00:08:00.226 10:08:05 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:00.485 * Looking for test storage... 00:08:00.485 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.485 10:08:06 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.485 10:08:06 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.485 10:08:06 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.485 10:08:06 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.485 10:08:06 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.485 10:08:06 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:00.485 10:08:06 nvme -- scripts/common.sh@345 -- # : 1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.485 10:08:06 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.485 10:08:06 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@353 -- # local d=1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.485 10:08:06 nvme -- scripts/common.sh@355 -- # echo 1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.485 10:08:06 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@353 -- # local d=2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.485 10:08:06 nvme -- scripts/common.sh@355 -- # echo 2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.485 10:08:06 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.485 10:08:06 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.485 10:08:06 nvme -- scripts/common.sh@368 -- # return 0
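The cmp_versions walk traced above splits both version strings on '.', '-' and ':' and compares them field by field as decimals; lt 1.15 2 succeeds because 1 < 2 already decides it in the first field. A condensed re-implementation of the same idea (a sketch assuming purely numeric fields; the real helper in scripts/common.sh also validates each field through its decimal helper):

  # Return success iff $1 is a strictly lower version than $2.
  lt() {
    local -a v1 v2
    local i
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    # Walk the longer of the two field lists, padding missing fields with 0.
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
      (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
      (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1   # equal is not less-than
  }
  lt 1.15 2 && echo "lcov 1.15 predates 2"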
00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.485 --rc genhtml_branch_coverage=1 00:08:00.485 --rc genhtml_function_coverage=1 00:08:00.485 --rc genhtml_legend=1 00:08:00.485 --rc geninfo_all_blocks=1 00:08:00.485 --rc geninfo_unexecuted_blocks=1 00:08:00.485 00:08:00.485 ' 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.485 --rc genhtml_branch_coverage=1 00:08:00.485 --rc genhtml_function_coverage=1 00:08:00.485 --rc genhtml_legend=1 00:08:00.485 --rc geninfo_all_blocks=1 00:08:00.485 --rc geninfo_unexecuted_blocks=1 00:08:00.485 00:08:00.485 ' 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.485 --rc genhtml_branch_coverage=1 00:08:00.485 --rc genhtml_function_coverage=1 00:08:00.485 --rc genhtml_legend=1 00:08:00.485 --rc geninfo_all_blocks=1 00:08:00.485 --rc geninfo_unexecuted_blocks=1 00:08:00.485 00:08:00.485 ' 00:08:00.485 10:08:06 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.485 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.485 --rc genhtml_branch_coverage=1 00:08:00.485 --rc genhtml_function_coverage=1 00:08:00.485 --rc genhtml_legend=1 00:08:00.485 --rc geninfo_all_blocks=1 00:08:00.485 --rc geninfo_unexecuted_blocks=1 00:08:00.485 00:08:00.485 ' 00:08:00.485 10:08:06 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:01.049 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.615 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:01.615 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:01.615 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:01.615 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:01.615 10:08:07 nvme -- nvme/nvme.sh@79 -- # uname 00:08:01.615 10:08:07 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:01.615 10:08:07 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:01.615 10:08:07 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1073 -- # stubpid=62862 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to be ready for secondary processes... 00:08:01.615 Waiting for stub to be ready for secondary processes... 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62862 ]] 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:01.615 10:08:07 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:08:01.615 [2024-11-04 10:08:07.266945] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:08:01.615 [2024-11-04 10:08:07.267072] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:02.554 [2024-11-04 10:08:08.075315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:02.554 [2024-11-04 10:08:08.174080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.554 [2024-11-04 10:08:08.174626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.554 [2024-11-04 10:08:08.174748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.554 [2024-11-04 10:08:08.194064] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:02.554 [2024-11-04 10:08:08.194350] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.554 [2024-11-04 10:08:08.206598] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:02.554 [2024-11-04 10:08:08.206797] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:02.554 [2024-11-04 10:08:08.212128] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.554 [2024-11-04 10:08:08.212370] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:02.554 [2024-11-04 10:08:08.212412] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:02.554 [2024-11-04 10:08:08.214385] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.554 [2024-11-04 10:08:08.214540] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:02.554 [2024-11-04 10:08:08.214589] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:02.554 [2024-11-04 10:08:08.217120] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:02.554 [2024-11-04 10:08:08.217277] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:02.554 [2024-11-04 10:08:08.217321] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:02.554 [2024-11-04 10:08:08.217354] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:02.554 [2024-11-04 10:08:08.217381] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created
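Each "fuse session ... created" notice above corresponds to a character device that SPDK's NVMe-CUSE layer exposes under /dev/spdk/, mirroring kernel naming (nvme0, nvme0n1, and so on). A quick spot-check sketch, assuming the default dev path implied by the device names in the notices and that nvme-cli is installed:

  # The cuse nodes should now exist alongside any kernel-managed ones.
  ls -l /dev/spdk/nvme0 /dev/spdk/nvme0n1
  # Kernel-style tooling can then be pointed at them, e.g. an identify round-trip:
  # nvme id-ctrl /dev/spdk/nvme0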
00:08:02.554 10:08:08 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:02.554 10:08:08 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:08:02.554 done. 00:08:02.554 10:08:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:02.554 10:08:08 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:08:02.554 10:08:08 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.554 10:08:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:02.554 ************************************ 00:08:02.554 START TEST nvme_reset 00:08:02.554 ************************************ 00:08:02.554 10:08:08 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:02.812 Initializing NVMe Controllers 00:08:02.812 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:02.812 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:02.812 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:02.812 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:02.812 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:02.812 ************************************ 00:08:02.812 END TEST nvme_reset 00:08:02.812 ************************************ 00:08:02.812 00:08:02.812 real 0m0.259s 00:08:02.812 user 0m0.082s 00:08:02.812 sys 0m0.124s 00:08:02.812 10:08:08 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:02.812 10:08:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:02.812 10:08:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:02.812 10:08:08 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:02.812 10:08:08 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:02.812 10:08:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:03.070 ************************************ 00:08:03.070 START TEST nvme_identify 00:08:03.070 ************************************ 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:08:03.070 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:03.070 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:03.070 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:03.070 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:03.070 10:08:08 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
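get_nvme_bdfs above builds the PCI address list by rendering gen_nvme.sh's JSON bdev config and extracting each controller's traddr; the same extraction works standalone, lifted straight from the trace:

  # List the NVMe PCI addresses (BDFs) the identify pass will iterate over.
  bdfs=($(/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'))
  printf '%s\n' "${bdfs[@]}"   # here: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0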
00:08:03.070 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:03.071 [2024-11-04 10:08:08.808073] nvme_ctrlr.c:3627:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62883 terminated unexpected 00:08:03.071 ===================================================== 00:08:03.071 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.071 ===================================================== 00:08:03.071 Controller Capabilities/Features 00:08:03.071 ================================ 00:08:03.071 Vendor ID: 1b36 00:08:03.071 Subsystem Vendor ID: 1af4 00:08:03.071 Serial Number: 12340 00:08:03.071 Model Number: QEMU NVMe Ctrl 00:08:03.071 Firmware Version: 8.0.0 00:08:03.071 Recommended Arb Burst: 6 00:08:03.071 IEEE OUI Identifier: 00 54 52 00:08:03.071 Multi-path I/O 00:08:03.071 May have multiple subsystem ports: No 00:08:03.071 May have multiple controllers: No 00:08:03.071 Associated with SR-IOV VF: No 00:08:03.071 Max Data Transfer Size: 524288 00:08:03.071 Max Number of Namespaces: 256 00:08:03.071 Max Number of I/O Queues: 64 00:08:03.071 NVMe Specification Version (VS): 1.4 00:08:03.071 NVMe Specification Version (Identify): 1.4 00:08:03.071 Maximum Queue Entries: 2048 00:08:03.071 Contiguous Queues Required: Yes 00:08:03.071 Arbitration Mechanisms Supported 00:08:03.071 Weighted Round Robin: Not Supported 00:08:03.071 Vendor Specific: Not Supported 00:08:03.071 Reset Timeout: 7500 ms 00:08:03.071 Doorbell Stride: 4 bytes 00:08:03.071 NVM Subsystem Reset: Not Supported 00:08:03.071 Command Sets Supported 00:08:03.071 NVM Command Set: Supported 00:08:03.071 Boot Partition: Not Supported 00:08:03.071 Memory Page Size Minimum: 4096 bytes 00:08:03.071 Memory Page Size Maximum: 65536 bytes 00:08:03.071 Persistent Memory Region: Not Supported 00:08:03.071 Optional Asynchronous Events Supported 00:08:03.071 Namespace Attribute Notices: Supported 00:08:03.071 Firmware Activation Notices: Not Supported 00:08:03.071 ANA Change Notices: Not Supported 00:08:03.071 PLE Aggregate Log Change Notices: Not Supported 00:08:03.071 LBA Status Info Alert Notices: Not Supported 00:08:03.071 EGE Aggregate Log Change Notices: Not Supported 00:08:03.071 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.071 Zone Descriptor Change Notices: Not Supported 00:08:03.071 Discovery Log Change Notices: Not Supported 00:08:03.071 Controller Attributes 00:08:03.071 128-bit Host Identifier: Not Supported 00:08:03.071 Non-Operational Permissive Mode: Not Supported 00:08:03.071 NVM Sets: Not Supported 00:08:03.071 Read Recovery Levels: Not Supported 00:08:03.071 Endurance Groups: Not Supported 00:08:03.071 Predictable Latency Mode: Not Supported 00:08:03.071 Traffic Based Keep Alive: Not Supported 00:08:03.071 Namespace Granularity: Not Supported 00:08:03.071 SQ Associations: Not Supported 00:08:03.071 UUID List: Not Supported 00:08:03.071 Multi-Domain Subsystem: Not Supported 00:08:03.071 Fixed Capacity Management: Not Supported 00:08:03.071 Variable Capacity Management: Not Supported 00:08:03.071 Delete Endurance Group: Not Supported 00:08:03.071 Delete NVM Set: Not Supported 00:08:03.071 Extended LBA Formats Supported: Supported 00:08:03.071 Flexible Data Placement Supported: Not Supported 00:08:03.071 00:08:03.071 Controller Memory Buffer Support 00:08:03.071 ================================ 00:08:03.071 Supported: No 00:08:03.071 00:08:03.071 Persistent Memory Region Support 00:08:03.071 ================================ 00:08:03.071 Supported: No 00:08:03.071 00:08:03.071 Admin Command Set Attributes 00:08:03.071 ============================
00:08:03.071 Security Send/Receive: Not Supported 00:08:03.071 Format NVM: Supported 00:08:03.071 Firmware Activate/Download: Not Supported 00:08:03.071 Namespace Management: Supported 00:08:03.071 Device Self-Test: Not Supported 00:08:03.071 Directives: Supported 00:08:03.071 NVMe-MI: Not Supported 00:08:03.071 Virtualization Management: Not Supported 00:08:03.071 Doorbell Buffer Config: Supported 00:08:03.071 Get LBA Status Capability: Not Supported 00:08:03.071 Command & Feature Lockdown Capability: Not Supported 00:08:03.071 Abort Command Limit: 4 00:08:03.071 Async Event Request Limit: 4 00:08:03.071 Number of Firmware Slots: N/A 00:08:03.071 Firmware Slot 1 Read-Only: N/A 00:08:03.071 Firmware Activation Without Reset: N/A 00:08:03.071 Multiple Update Detection Support: N/A 00:08:03.331 Firmware Update Granularity: No Information Provided 00:08:03.331 Per-Namespace SMART Log: Yes 00:08:03.331 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.331 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:03.331 Command Effects Log Page: Supported 00:08:03.332 Get Log Page Extended Data: Supported 00:08:03.332 Telemetry Log Pages: Not Supported 00:08:03.332 Persistent Event Log Pages: Not Supported 00:08:03.332 Supported Log Pages Log Page: May Support 00:08:03.332 Commands Supported & Effects Log Page: Not Supported 00:08:03.332 Feature Identifiers & Effects Log Page: May Support 00:08:03.332 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.332 Data Area 4 for Telemetry Log: Not Supported 00:08:03.332 Error Log Page Entries Supported: 1 00:08:03.332 Keep Alive: Not Supported 00:08:03.332 00:08:03.332 NVM Command Set Attributes 00:08:03.332 ========================== 00:08:03.332 Submission Queue Entry Size 00:08:03.332 Max: 64 00:08:03.332 Min: 64 00:08:03.332 Completion Queue Entry Size 00:08:03.332 Max: 16 00:08:03.332 Min: 16 00:08:03.332 Number of Namespaces: 256 00:08:03.332 Compare Command: Supported 00:08:03.332 Write Uncorrectable Command: Not Supported 00:08:03.332 Dataset Management Command: Supported 00:08:03.332 Write Zeroes Command: Supported 00:08:03.332 Set Features Save Field: Supported 00:08:03.332 Reservations: Not Supported 00:08:03.332 Timestamp: Supported 00:08:03.332 Copy: Supported 00:08:03.332 Volatile Write Cache: Present 00:08:03.332 Atomic Write Unit (Normal): 1 00:08:03.332 Atomic Write Unit (PFail): 1 00:08:03.332 Atomic Compare & Write Unit: 1 00:08:03.332 Fused Compare & Write: Not Supported 00:08:03.332 Scatter-Gather List 00:08:03.332 SGL Command Set: Supported 00:08:03.332 SGL Keyed: Not Supported 00:08:03.332 SGL Bit Bucket Descriptor: Not Supported 00:08:03.332 SGL Metadata Pointer: Not Supported 00:08:03.332 Oversized SGL: Not Supported 00:08:03.332 SGL Metadata Address: Not Supported 00:08:03.332 SGL Offset: Not Supported 00:08:03.332 Transport SGL Data Block: Not Supported 00:08:03.332 Replay Protected Memory Block: Not Supported 00:08:03.332 00:08:03.332 Firmware Slot Information 00:08:03.332 ========================= 00:08:03.332 Active slot: 1 00:08:03.332 Slot 1 Firmware Revision: 1.0 00:08:03.332 00:08:03.332 00:08:03.332 Commands Supported and Effects 00:08:03.332 ============================== 00:08:03.332 Admin Commands 00:08:03.332 -------------- 00:08:03.332 Delete I/O Submission Queue (00h): Supported 00:08:03.332 Create I/O Submission Queue (01h): Supported 00:08:03.332 Get Log Page (02h): Supported 00:08:03.332 Delete I/O Completion Queue (04h): Supported 00:08:03.332 Create I/O Completion Queue (05h): Supported 00:08:03.332 Identify (06h): Supported
00:08:03.332 Abort (08h): Supported 00:08:03.332 Set Features (09h): Supported 00:08:03.332 Get Features (0Ah): Supported 00:08:03.332 Asynchronous Event Request (0Ch): Supported 00:08:03.332 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.332 Directive Send (19h): Supported 00:08:03.332 Directive Receive (1Ah): Supported 00:08:03.332 Virtualization Management (1Ch): Supported 00:08:03.332 Doorbell Buffer Config (7Ch): Supported 00:08:03.332 Format NVM (80h): Supported LBA-Change 00:08:03.332 I/O Commands 00:08:03.332 ------------ 00:08:03.332 Flush (00h): Supported LBA-Change 00:08:03.332 Write (01h): Supported LBA-Change 00:08:03.332 Read (02h): Supported 00:08:03.332 Compare (05h): Supported 00:08:03.332 Write Zeroes (08h): Supported LBA-Change 00:08:03.332 Dataset Management (09h): Supported LBA-Change 00:08:03.332 Unknown (0Ch): Supported 00:08:03.332 Unknown (12h): Supported 00:08:03.332 Copy (19h): Supported LBA-Change 00:08:03.332 Unknown (1Dh): Supported LBA-Change 00:08:03.332 00:08:03.332 Error Log 00:08:03.332 ========= 00:08:03.332 00:08:03.332 Arbitration 00:08:03.332 =========== 00:08:03.332 Arbitration Burst: no limit 00:08:03.332 00:08:03.332 Power Management 00:08:03.332 ================ 00:08:03.332 Number of Power States: 1 00:08:03.332 Current Power State: Power State #0 00:08:03.332 Power State #0: 00:08:03.332 Max Power: 25.00 W 00:08:03.332 Non-Operational State: Operational 00:08:03.332 Entry Latency: 16 microseconds 00:08:03.332 Exit Latency: 4 microseconds 00:08:03.332 Relative Read Throughput: 0 00:08:03.332 Relative Read Latency: 0 00:08:03.332 Relative Write Throughput: 0 00:08:03.332 Relative Write Latency: 0 00:08:03.332 Idle Power: Not Reported 00:08:03.332 Active Power: Not Reported 00:08:03.332 Non-Operational Permissive Mode: Not Supported 00:08:03.332 00:08:03.332 Health Information 00:08:03.332 ================== 00:08:03.332 Critical Warnings: 00:08:03.332 Available Spare Space: OK 00:08:03.332 Temperature: OK 00:08:03.332 Device Reliability: OK 00:08:03.332 Read Only: No 00:08:03.332 Volatile Memory Backup: OK 00:08:03.332 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.332 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.332 Available Spare: 0% 00:08:03.332 Available Spare Threshold: 0% 00:08:03.332 Life Percentage Used: 0% 00:08:03.332 Data Units Read: 665 00:08:03.332 Data Units Written: 593 00:08:03.332 Host Read Commands: 38877 00:08:03.332 Host Write Commands: 38663 00:08:03.332 Controller Busy Time: 0 minutes 00:08:03.332 Power Cycles: 0 00:08:03.332 Power On Hours: 0 hours 00:08:03.332 Unsafe Shutdowns: 0 00:08:03.332 Unrecoverable Media Errors: 0 00:08:03.332 Lifetime Error Log Entries: 0 00:08:03.332 Warning Temperature Time: 0 minutes 00:08:03.332 Critical Temperature Time: 0 minutes 00:08:03.332 00:08:03.332 Number of Queues 00:08:03.332 ================ 00:08:03.332 Number of I/O Submission Queues: 64 00:08:03.332 Number of I/O Completion Queues: 64 00:08:03.332 00:08:03.332 ZNS Specific Controller Data 00:08:03.332 ============================ 00:08:03.332 Zone Append Size Limit: 0 00:08:03.332 00:08:03.332 00:08:03.332 Active Namespaces 00:08:03.332 ================= 00:08:03.332 Namespace ID:1 00:08:03.332 Error Recovery Timeout: Unlimited 00:08:03.332 Command Set Identifier: NVM (00h) 00:08:03.332 Deallocate: Supported 00:08:03.332 Deallocated/Unwritten Error: Supported 00:08:03.332 Deallocated Read Value: All 0x00 00:08:03.332 Deallocate in Write Zeroes: Not Supported 00:08:03.332 Deallocated Guard 
Field: 0xFFFF 00:08:03.332 Flush: Supported 00:08:03.332 Reservation: Not Supported 00:08:03.332 Metadata Transferred as: Separate Metadata Buffer 00:08:03.332 Namespace Sharing Capabilities: Private 00:08:03.332 Size (in LBAs): 1548666 (5GiB) 00:08:03.332 Capacity (in LBAs): 1548666 (5GiB) 00:08:03.332 Utilization (in LBAs): 1548666 (5GiB) 00:08:03.332 Thin Provisioning: Not Supported 00:08:03.332 Per-NS Atomic Units: No 00:08:03.332 Maximum Single Source Range Length: 128 00:08:03.332 Maximum Copy Length: 128 00:08:03.332 Maximum Source Range Count: 128 00:08:03.332 NGUID/EUI64 Never Reused: No 00:08:03.332 Namespace Write Protected: No 00:08:03.332 Number of LBA Formats: 8 00:08:03.332 Current LBA Format: LBA Format #07 00:08:03.332 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.332 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.332 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.332 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.332 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.332 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.332 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.332 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.332 00:08:03.332 NVM Specific Namespace Data 00:08:03.332 =========================== 00:08:03.332 Logical Block Storage Tag Mask: 0 00:08:03.332 Protection Information Capabilities: 00:08:03.332 16b Guard Protection Information Storage Tag Support: No 00:08:03.332 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.332 Storage Tag Check Read Support: No 00:08:03.332 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.332 ===================================================== 00:08:03.332 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.332 ===================================================== 00:08:03.332 Controller Capabilities/Features 00:08:03.332 ================================ 00:08:03.332 Vendor ID: 1b36 00:08:03.332 Subsystem Vendor ID: 1af4 00:08:03.332 Serial Number: 12341 00:08:03.332 Model Number: QEMU NVMe Ctrl 00:08:03.333 Firmware Version: 8.0.0 00:08:03.333 Recommended Arb Burst: 6 00:08:03.333 IEEE OUI Identifier: 00 54 52 00:08:03.333 Multi-path I/O 00:08:03.333 May have multiple subsystem ports: No 00:08:03.333 May have multiple controllers: No 00:08:03.333 Associated with SR-IOV VF: No 00:08:03.333 Max Data Transfer Size: 524288 00:08:03.333 Max Number of Namespaces: 256 00:08:03.333 Max Number of I/O Queues: 64 00:08:03.333 NVMe Specification Version (VS): 1.4 00:08:03.333 NVMe Specification Version (Identify): 1.4 00:08:03.333 Maximum Queue Entries: 2048 00:08:03.333 Contiguous Queues Required: Yes 00:08:03.333 Arbitration Mechanisms Supported 00:08:03.333 
Weighted Round Robin: Not Supported 00:08:03.333 Vendor Specific: Not Supported 00:08:03.333 Reset Timeout: 7500 ms 00:08:03.333 Doorbell Stride: 4 bytes 00:08:03.333 NVM Subsystem Reset: Not Supported 00:08:03.333 Command Sets Supported 00:08:03.333 NVM Command Set: Supported 00:08:03.333 Boot Partition: Not Supported 00:08:03.333 Memory Page Size Minimum: 4096 bytes 00:08:03.333 Memory Page Size Maximum: 65536 bytes 00:08:03.333 Persistent Memory Region: Not Supported 00:08:03.333 Optional Asynchronous Events Supported 00:08:03.333 Namespace Attribute Notices: Supported 00:08:03.333 Firmware Activation Notices: Not Supported 00:08:03.333 ANA Change Notices: Not Supported 00:08:03.333 PLE Aggregate Log Change Notices: Not Supported 00:08:03.333 LBA Status Info Alert Notices: Not Supported 00:08:03.333 EGE Aggregate Log Change Notices: Not Supported 00:08:03.333 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.333 Zone Descriptor Change Notices: Not Supported 00:08:03.333 Discovery Log Change Notices: Not Supported 00:08:03.333 Controller Attributes 00:08:03.333 128-bit Host Identifier: Not Supported 00:08:03.333 Non-Operational Permissive Mode: Not Supported 00:08:03.333 NVM Sets: Not Supported 00:08:03.333 Read Recovery Levels: Not Supported 00:08:03.333 Endurance Groups: Not Supported 00:08:03.333 Predictable Latency Mode: Not Supported 00:08:03.333 Traffic Based Keep Alive: Not Supported 00:08:03.333 Namespace Granularity: Not Supported 00:08:03.333 SQ Associations: Not Supported 00:08:03.333 UUID List: Not Supported 00:08:03.333 Multi-Domain Subsystem: Not Supported 00:08:03.333 Fixed Capacity Management: Not Supported 00:08:03.333 Variable Capacity Management: Not Supported 00:08:03.333 Delete Endurance Group: Not Supported 00:08:03.333 Delete NVM Set: Not Supported 00:08:03.333 Extended LBA Formats Supported: Supported 00:08:03.333 Flexible Data Placement Supported: Not Supported 00:08:03.333 00:08:03.333 Controller Memory Buffer Support 00:08:03.333 ================================ 00:08:03.333 Supported: No 00:08:03.333 00:08:03.333 Persistent Memory Region Support 00:08:03.333 ================================ 00:08:03.333 Supported: No 00:08:03.333 00:08:03.333 Admin Command Set Attributes 00:08:03.333 ============================ 00:08:03.333 Security Send/Receive: Not Supported 00:08:03.333 Format NVM: Supported 00:08:03.333 Firmware Activate/Download: Not Supported 00:08:03.333 Namespace Management: Supported 00:08:03.333 Device Self-Test: Not Supported 00:08:03.333 Directives: Supported 00:08:03.333 NVMe-MI: Not Supported 00:08:03.333 Virtualization Management: Not Supported 00:08:03.333 Doorbell Buffer Config: Supported 00:08:03.333 Get LBA Status Capability: Not Supported 00:08:03.333 Command & Feature Lockdown Capability: Not Supported 00:08:03.333 Abort Command Limit: 4 00:08:03.333 Async Event Request Limit: 4 00:08:03.333 Number of Firmware Slots: N/A 00:08:03.333 Firmware Slot 1 Read-Only: N/A 00:08:03.333 Firmware Activation Without Reset: N/A 00:08:03.333 Multiple Update Detection Support: N/A 00:08:03.333 Firmware Update Granularity: No Information Provided 00:08:03.333 Per-Namespace SMART Log: Yes 00:08:03.333 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.333 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:03.333 Command Effects Log Page: Supported 00:08:03.333 Get Log Page Extended Data: Supported 00:08:03.333 Telemetry Log Pages: Not Supported 00:08:03.333 Persistent Event Log Pages: Not Supported
00:08:03.333 Supported Log Pages Log Page: May Support 00:08:03.333 Commands Supported & Effects Log Page: Not Supported 00:08:03.333 Feature Identifiers & Effects Log Page: May Support 00:08:03.333 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.333 Data Area 4 for Telemetry Log: Not Supported 00:08:03.333 Error Log Page Entries Supported: 1 00:08:03.333 Keep Alive: Not Supported 00:08:03.333 00:08:03.333 NVM Command Set Attributes 00:08:03.333 ========================== 00:08:03.333 Submission Queue Entry Size 00:08:03.333 Max: 64 00:08:03.333 Min: 64 00:08:03.333 Completion Queue Entry Size 00:08:03.333 Max: 16 00:08:03.333 Min: 16 00:08:03.333 Number of Namespaces: 256 00:08:03.333 Compare Command: Supported 00:08:03.333 Write Uncorrectable Command: Not Supported 00:08:03.333 Dataset Management Command: Supported 00:08:03.333 Write Zeroes Command: Supported 00:08:03.333 Set Features Save Field: Supported 00:08:03.333 Reservations: Not Supported 00:08:03.333 Timestamp: Supported 00:08:03.333 Copy: Supported 00:08:03.333 Volatile Write Cache: Present 00:08:03.333 Atomic Write Unit (Normal): 1 00:08:03.333 Atomic Write Unit (PFail): 1 00:08:03.333 Atomic Compare & Write Unit: 1 00:08:03.333 Fused Compare & Write: Not Supported 00:08:03.333 Scatter-Gather List 00:08:03.333 SGL Command Set: Supported 00:08:03.333 SGL Keyed: Not Supported 00:08:03.333 SGL Bit Bucket Descriptor: Not Supported 00:08:03.333 SGL Metadata Pointer: Not Supported 00:08:03.333 Oversized SGL: Not Supported 00:08:03.333 SGL Metadata Address: Not Supported 00:08:03.333 SGL Offset: Not Supported 00:08:03.333 Transport SGL Data Block: Not Supported 00:08:03.333 Replay Protected Memory Block: Not Supported 00:08:03.333 00:08:03.333 Firmware Slot Information 00:08:03.333 ========================= 00:08:03.333 Active slot: 1 00:08:03.333 Slot 1 Firmware Revision: 1.0 00:08:03.333 00:08:03.333 00:08:03.333 Commands Supported and Effects 00:08:03.333 ============================== 00:08:03.333 Admin Commands 00:08:03.333 -------------- 00:08:03.333 Delete I/O Submission Queue (00h): Supported 00:08:03.333 Create I/O Submission Queue (01h): Supported 00:08:03.333 Get Log Page (02h): Supported 00:08:03.333 Delete I/O Completion Queue (04h): Supported 00:08:03.333 Create I/O Completion Queue (05h): Supported 00:08:03.333 Identify (06h): Supported 00:08:03.333 Abort (08h): Supported 00:08:03.333 Set Features (09h): Supported 00:08:03.333 Get Features (0Ah): Supported 00:08:03.333 Asynchronous Event Request (0Ch): Supported 00:08:03.333 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.333 Directive Send (19h): Supported 00:08:03.333 Directive Receive (1Ah): Supported 00:08:03.333 Virtualization Management (1Ch): Supported 00:08:03.333 Doorbell Buffer Config (7Ch): Supported 00:08:03.333 Format NVM (80h): Supported LBA-Change 00:08:03.333 I/O Commands 00:08:03.333 ------------ 00:08:03.333 Flush (00h): Supported LBA-Change 00:08:03.333 Write (01h): Supported LBA-Change 00:08:03.333 Read (02h): Supported 00:08:03.333 Compare (05h): Supported 00:08:03.333 Write Zeroes (08h): Supported LBA-Change 00:08:03.333 Dataset Management (09h): Supported LBA-Change 00:08:03.333 Unknown (0Ch): Supported 00:08:03.333 Unknown (12h): Supported 00:08:03.333 Copy (19h): Supported LBA-Change 00:08:03.333 Unknown (1Dh): Supported LBA-Change 00:08:03.333 00:08:03.333 Error Log 00:08:03.333 ========= 00:08:03.333 00:08:03.333 Arbitration 00:08:03.333 =========== 00:08:03.333 Arbitration Burst: no limit 00:08:03.333 00:08:03.333 Power Management 00:08:03.333
================ 00:08:03.333 Number of Power States: 1 00:08:03.333 Current Power State: Power State #0 00:08:03.333 Power State #0: 00:08:03.333 Max Power: 25.00 W 00:08:03.333 Non-Operational State: Operational 00:08:03.333 Entry Latency: 16 microseconds 00:08:03.333 Exit Latency: 4 microseconds 00:08:03.333 Relative Read Throughput: 0 00:08:03.333 Relative Read Latency: 0 00:08:03.333 Relative Write Throughput: 0 00:08:03.333 Relative Write Latency: 0 00:08:03.333 Idle Power: Not Reported 00:08:03.333 Active Power: Not Reported 00:08:03.333 Non-Operational Permissive Mode: Not Supported 00:08:03.333 00:08:03.333 Health Information 00:08:03.333 ================== 00:08:03.333 Critical Warnings: 00:08:03.333 Available Spare Space: OK 00:08:03.333 Temperature: OK 00:08:03.333 Device Reliability: OK 00:08:03.333 Read Only: No 00:08:03.333 Volatile Memory Backup: OK 00:08:03.334 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.334 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.334 Available Spare: 0% 00:08:03.334 Available Spare Threshold: 0% 00:08:03.334 Life Percentage Used: 0% 00:08:03.334 Data Units Read: 1003 00:08:03.334 Data Units Written: 868 00:08:03.334 Host Read Commands: 56695 00:08:03.334 Host Write Commands: 55448 00:08:03.334 Controller Busy Time: 0 minutes 00:08:03.334 Power Cycles: 0 00:08:03.334 Power On Hours: 0 hours 00:08:03.334 Unsafe Shutdowns: 0 00:08:03.334 Unrecoverable Media Errors: 0 00:08:03.334 Lifetime Error Log Entries: 0 00:08:03.334 Warning Temperature Time: 0 minutes 00:08:03.334 Critical Temperature Time: 0 minutes 00:08:03.334 00:08:03.334 Number of Queues 00:08:03.334 ================ 00:08:03.334 Number of I/O Submission Queues: 64 00:08:03.334 Number of I/O Completion Queues: 64 00:08:03.334 00:08:03.334 ZNS Specific Controller Data 00:08:03.334 ============================ 00:08:03.334 Zone Append Size Limit: 0 00:08:03.334 00:08:03.334 00:08:03.334 Active Namespaces 00:08:03.334 ================= 00:08:03.334 Namespace ID:1 00:08:03.334 Error Recovery Timeout: Unlimited 00:08:03.334 Command Set Identifier: NVM (00h) 00:08:03.334 Deallocate: Supported 00:08:03.334 Deallocated/Unwritten Error: Supported 00:08:03.334 Deallocated Read Value: All 0x00 00:08:03.334 Deallocate in Write Zeroes: Not Supported 00:08:03.334 Deallocated Guard Field: 0xFFFF 00:08:03.334 Flush: Supported 00:08:03.334 Reservation: Not Supported 00:08:03.334 Namespace Sharing Capabilities: Private 00:08:03.334 Size (in LBAs): 1310720 (5GiB) 00:08:03.334 Capacity (in LBAs): 1310720 (5GiB) 00:08:03.334 Utilization (in LBAs): 1310720 (5GiB) 00:08:03.334 Thin Provisioning: Not Supported 00:08:03.334 Per-NS Atomic Units: No 00:08:03.334 Maximum Single Source Range Length: 128 00:08:03.334 Maximum Copy Length: 128 00:08:03.334 Maximum Source Range Count: 128 00:08:03.334 NGUID/EUI64 Never Reused: No 00:08:03.334 Namespace Write Protected: No 00:08:03.334 Number of LBA Formats: 8 00:08:03.334 Current LBA Format: LBA Format #04 00:08:03.334 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.334 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.334 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.334 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.334 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.334 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.334 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.334 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.334 00:08:03.334 NVM Specific Namespace Data 
00:08:03.334 =========================== 00:08:03.334 Logical Block Storage Tag Mask: 0 00:08:03.334 Protection Information Capabilities: 00:08:03.334 16b Guard Protection Information Storage Tag Support: No 00:08:03.334 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.334 Storage Tag Check Read Support: No 00:08:03.334 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.334 ===================================================== 00:08:03.334 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:03.334 ===================================================== 00:08:03.334 Controller Capabilities/Features 00:08:03.334 ================================ 00:08:03.334 Vendor ID: 1b36 00:08:03.334 Subsystem Vendor ID: 1af4 00:08:03.334 Serial Number: 12343 00:08:03.334 Model Number: QEMU NVMe Ctrl 00:08:03.334 Firmware Version: 8.0.0 00:08:03.334 Recommended Arb Burst: 6 00:08:03.334 IEEE OUI Identifier: 00 54 52 00:08:03.334 Multi-path I/O 00:08:03.334 May have multiple subsystem ports: No 00:08:03.334 May have multiple controllers: Yes 00:08:03.334 Associated with SR-IOV VF: No 00:08:03.334 Max Data Transfer Size: 524288 00:08:03.334 Max Number of Namespaces: 256 00:08:03.334 Max Number of I/O Queues: 64 00:08:03.334 NVMe Specification Version (VS): 1.4 00:08:03.334 NVMe Specification Version (Identify): 1.4 00:08:03.334 Maximum Queue Entries: 2048 00:08:03.334 Contiguous Queues Required: Yes 00:08:03.334 Arbitration Mechanisms Supported 00:08:03.334 Weighted Round Robin: Not Supported 00:08:03.334 Vendor Specific: Not Supported 00:08:03.334 Reset Timeout: 7500 ms 00:08:03.334 Doorbell Stride: 4 bytes 00:08:03.334 NVM Subsystem Reset: Not Supported 00:08:03.334 Command Sets Supported 00:08:03.334 NVM Command Set: Supported 00:08:03.334 Boot Partition: Not Supported 00:08:03.334 Memory Page Size Minimum: 4096 bytes 00:08:03.334 Memory Page Size Maximum: 65536 bytes 00:08:03.334 Persistent Memory Region: Not Supported 00:08:03.334 Optional Asynchronous Events Supported 00:08:03.334 Namespace Attribute Notices: Supported 00:08:03.334 Firmware Activation Notices: Not Supported 00:08:03.334 ANA Change Notices: Not Supported 00:08:03.334 PLE Aggregate Log Change Notices: Not Supported 00:08:03.334 LBA Status Info Alert Notices: Not Supported 00:08:03.334 EGE Aggregate Log Change Notices: Not Supported 00:08:03.334 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.334 Zone Descriptor Change Notices: Not Supported 00:08:03.334 Discovery Log Change Notices: Not Supported 00:08:03.334 Controller Attributes 00:08:03.334 128-bit Host Identifier: Not Supported 00:08:03.334 Non-Operational Permissive Mode: Not Supported 00:08:03.334 NVM Sets: Not Supported 00:08:03.334 Read Recovery Levels: Not Supported 
00:08:03.334 Endurance Groups: Supported 00:08:03.334 Predictable Latency Mode: Not Supported 00:08:03.334 Traffic Based Keep Alive: Not Supported 00:08:03.334 Namespace Granularity: Not Supported 00:08:03.334 SQ Associations: Not Supported 00:08:03.334 UUID List: Not Supported 00:08:03.334 Multi-Domain Subsystem: Not Supported 00:08:03.334 Fixed Capacity Management: Not Supported 00:08:03.334 Variable Capacity Management: Not Supported 00:08:03.334 Delete Endurance Group: Not Supported 00:08:03.334 Delete NVM Set: Not Supported 00:08:03.334 Extended LBA Formats Supported: Supported 00:08:03.334 Flexible Data Placement Supported: Supported 00:08:03.334 00:08:03.334 Controller Memory Buffer Support 00:08:03.334 ================================ 00:08:03.334 Supported: No 00:08:03.334 00:08:03.334 Persistent Memory Region Support 00:08:03.334 ================================ 00:08:03.334 Supported: No 00:08:03.334 00:08:03.334 Admin Command Set Attributes 00:08:03.334 ============================ 00:08:03.334 Security Send/Receive: Not Supported 00:08:03.334 Format NVM: Supported 00:08:03.334 Firmware Activate/Download: Not Supported 00:08:03.334 Namespace Management: Supported 00:08:03.334 Device Self-Test: Not Supported 00:08:03.334 Directives: Supported 00:08:03.334 NVMe-MI: Not Supported 00:08:03.334 Virtualization Management: Not Supported 00:08:03.334 Doorbell Buffer Config: Supported 00:08:03.334 Get LBA Status Capability: Not Supported 00:08:03.334 Command & Feature Lockdown Capability: Not Supported 00:08:03.334 Abort Command Limit: 4 00:08:03.334 Async Event Request Limit: 4 00:08:03.334 Number of Firmware Slots: N/A 00:08:03.334 Firmware Slot 1 Read-Only: N/A 00:08:03.334 Firmware Activation Without Reset: N/A 00:08:03.334 Multiple Update Detection Support: N/A 00:08:03.334 Firmware Update Granularity: No Information Provided 00:08:03.334 Per-Namespace SMART Log: Yes 00:08:03.334 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.334 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:03.334 Command Effects Log Page: Supported 00:08:03.334 Get Log Page Extended Data: Supported 00:08:03.334 Telemetry Log Pages: Not Supported 00:08:03.334 Persistent Event Log Pages: Not Supported 00:08:03.334 Supported Log Pages Log Page: May Support 00:08:03.334 Commands Supported & Effects Log Page: Not Supported 00:08:03.334 Feature Identifiers & Effects Log Page: May Support 00:08:03.334 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.334 Data Area 4 for Telemetry Log: Not Supported 00:08:03.334 Error Log Page Entries Supported: 1 00:08:03.334 Keep Alive: Not Supported 00:08:03.334 00:08:03.334 NVM Command Set Attributes 00:08:03.334 ========================== 00:08:03.334 Submission Queue Entry Size 00:08:03.334 Max: 64 00:08:03.334 Min: 64 00:08:03.334 Completion Queue Entry Size 00:08:03.335 Max: 16 00:08:03.335 Min: 16 00:08:03.335 Number of Namespaces: 256 00:08:03.335 Compare Command: Supported 00:08:03.335 Write Uncorrectable Command: Not Supported 00:08:03.335 Dataset Management Command: Supported 00:08:03.335 Write Zeroes Command: Supported 00:08:03.335 Set Features Save Field: Supported 00:08:03.335 Reservations: Not Supported 00:08:03.335 Timestamp: Supported 00:08:03.335 Copy: Supported 00:08:03.335 Volatile Write Cache: Present 00:08:03.335 Atomic Write Unit (Normal): 1 00:08:03.335 Atomic Write Unit (PFail): 1 00:08:03.335 Atomic Compare & Write Unit: 1 00:08:03.335 Fused Compare & Write: Not Supported 00:08:03.335 Scatter-Gather List
00:08:03.335 SGL Command Set: Supported 00:08:03.335 SGL Keyed: Not Supported 00:08:03.335 SGL Bit Bucket Descriptor: Not Supported 00:08:03.335 SGL Metadata Pointer: Not Supported 00:08:03.335 Oversized SGL: Not Supported 00:08:03.335 SGL Metadata Address: Not Supported 00:08:03.335 SGL Offset: Not Supported 00:08:03.335 Transport SGL Data Block: Not Supported 00:08:03.335 Replay Protected Memory Block: Not Supported 00:08:03.335 00:08:03.335 Firmware Slot Information 00:08:03.335 ========================= 00:08:03.335 Active slot: 1 00:08:03.335 Slot 1 Firmware Revision: 1.0 00:08:03.335 00:08:03.335 00:08:03.335 Commands Supported and Effects 00:08:03.335 ============================== 00:08:03.335 Admin Commands 00:08:03.335 -------------- 00:08:03.335 Delete I/O Submission Queue (00h): Supported 00:08:03.335 Create I/O Submission Queue (01h): Supported 00:08:03.335 Get Log Page (02h): Supported 00:08:03.335 Delete I/O Completion Queue (04h): Supported 00:08:03.335 Create I/O Completion Queue (05h): Supported 00:08:03.335 Identify (06h): Supported 00:08:03.335 Abort (08h): Supported 00:08:03.335 Set Features (09h): Supported 00:08:03.335 Get Features (0Ah): Supported 00:08:03.335 Asynchronous Event Request (0Ch): Supported 00:08:03.335 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.335 Directive Send (19h): Supported 00:08:03.335 Directive Receive (1Ah): Supported 00:08:03.335 Virtualization Management (1Ch): Supported 00:08:03.335 Doorbell Buffer Config (7Ch): Supported 00:08:03.335 Format NVM (80h): Supported LBA-Change 00:08:03.335 I/O Commands 00:08:03.335 ------------ 00:08:03.335 Flush (00h): Supported LBA-Change 00:08:03.335 Write (01h): Supported LBA-Change 00:08:03.335 Read (02h): Supported 00:08:03.335 Compare (05h): Supported 00:08:03.335 Write Zeroes (08h): Supported LBA-Change 00:08:03.335 Dataset Management (09h): Supported LBA-Change 00:08:03.335 Unknown (0Ch): Supported 00:08:03.335 Unknown (12h): Supported 00:08:03.335 Copy (19h): Supported LBA-Change 00:08:03.335 Unknown (1Dh): Supported LBA-Change 00:08:03.335 00:08:03.335 Error Log 00:08:03.335 ========= 00:08:03.335 00:08:03.335 Arbitration 00:08:03.335 =========== 00:08:03.335 Arbitration Burst: no limit 00:08:03.335 00:08:03.335 Power Management 00:08:03.335 ================ 00:08:03.335 Number of Power States: 1 00:08:03.335 Current Power State: Power State #0 00:08:03.335 Power State #0: 00:08:03.335 Max Power: 25.00 W 00:08:03.335 Non-Operational State: Operational 00:08:03.335 Entry Latency: 16 microseconds 00:08:03.335 Exit Latency: 4 microseconds 00:08:03.335 Relative Read Throughput: 0 00:08:03.335 Relative Read Latency: 0 00:08:03.335 Relative Write Throughput: 0 00:08:03.335 Relative Write Latency: 0 00:08:03.335 Idle Power: Not Reported 00:08:03.335 Active Power: Not Reported 00:08:03.335 Non-Operational Permissive Mode: Not Supported 00:08:03.335 00:08:03.335 Health Information 00:08:03.335 ================== 00:08:03.335 Critical Warnings: 00:08:03.335 Available Spare Space: OK 00:08:03.335 Temperature: OK 00:08:03.335 Device Reliability: OK 00:08:03.335 Read Only: No 00:08:03.335 Volatile Memory Backup: OK 00:08:03.335 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.335 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.335 Available Spare: 0% 00:08:03.335 Available Spare Threshold: 0% 00:08:03.335 [2024-11-04 10:08:08.810162] nvme_ctrlr.c:3627:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62883 terminated unexpected [2024-11-04 10:08:08.811722] nvme_ctrlr.c:3627:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62883 terminated unexpected
00:08:03.335 Life Percentage Used: 0% 00:08:03.335 Data Units Read: 804 00:08:03.335 Data Units Written: 733 00:08:03.335 Host Read Commands: 40379 00:08:03.335 Host Write Commands: 39802 00:08:03.335 Controller Busy Time: 0 minutes 00:08:03.335 Power Cycles: 0 00:08:03.335 Power On Hours: 0 hours 00:08:03.335 Unsafe Shutdowns: 0 00:08:03.335 Unrecoverable Media Errors: 0 00:08:03.335 Lifetime Error Log Entries: 0 00:08:03.335 Warning Temperature Time: 0 minutes 00:08:03.335 Critical Temperature Time: 0 minutes 00:08:03.335 00:08:03.335 Number of Queues 00:08:03.335 ================ 00:08:03.335 Number of I/O Submission Queues: 64 00:08:03.335 Number of I/O Completion Queues: 64 00:08:03.335 00:08:03.335 ZNS Specific Controller Data 00:08:03.335 ============================ 00:08:03.335 Zone Append Size Limit: 0 00:08:03.335 00:08:03.335 00:08:03.335 Active Namespaces 00:08:03.335 ================= 00:08:03.335 Namespace ID:1 00:08:03.335 Error Recovery Timeout: Unlimited 00:08:03.335 Command Set Identifier: NVM (00h) 00:08:03.335 Deallocate: Supported 00:08:03.335 Deallocated/Unwritten Error: Supported 00:08:03.335 Deallocated Read Value: All 0x00 00:08:03.335 Deallocate in Write Zeroes: Not Supported 00:08:03.335 Deallocated Guard Field: 0xFFFF 00:08:03.335 Flush: Supported 00:08:03.335 Reservation: Not Supported 00:08:03.335 Namespace Sharing Capabilities: Multiple Controllers 00:08:03.335 Size (in LBAs): 262144 (1GiB) 00:08:03.335 Capacity (in LBAs): 262144 (1GiB) 00:08:03.335 Utilization (in LBAs): 262144 (1GiB) 00:08:03.335 Thin Provisioning: Not Supported 00:08:03.335 Per-NS Atomic Units: No 00:08:03.335 Maximum Single Source Range Length: 128 00:08:03.335 Maximum Copy Length: 128 00:08:03.335 Maximum Source Range Count: 128 00:08:03.335 NGUID/EUI64 Never Reused: No 00:08:03.335 Namespace Write Protected: No 00:08:03.335 Endurance group ID: 1 00:08:03.335 Number of LBA Formats: 8 00:08:03.335 Current LBA Format: LBA Format #04 00:08:03.335 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.335 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.335 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.335 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.335 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.335 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.335 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.335 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.335 00:08:03.335 Get Feature FDP: 00:08:03.335 ================ 00:08:03.335 Enabled: Yes 00:08:03.335 FDP configuration index: 0 00:08:03.335 00:08:03.335 FDP configurations log page 00:08:03.335 =========================== 00:08:03.335 Number of FDP configurations: 1 00:08:03.335 Version: 0 00:08:03.335 Size: 112 00:08:03.335 FDP Configuration Descriptor: 0 00:08:03.335 Descriptor Size: 96 00:08:03.335 Reclaim Group Identifier format: 2 00:08:03.335 FDP Volatile Write Cache: Not Present 00:08:03.335 FDP Configuration: Valid 00:08:03.335 Vendor Specific Size: 0 00:08:03.335 Number of Reclaim Groups: 2 00:08:03.335 Number of Reclaim Unit Handles: 8 00:08:03.335 Max Placement Identifiers: 128 00:08:03.335 Number of Namespaces Supported: 256 00:08:03.335 Reclaim unit Nominal Size: 6000000 bytes 00:08:03.335 Estimated Reclaim Unit Time Limit: Not Reported 00:08:03.335 RUH Desc #000: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #001: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #002: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #003: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #004: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #005: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #006: RUH Type: Initially Isolated 00:08:03.335 RUH Desc #007: RUH Type: Initially Isolated 00:08:03.335 00:08:03.335 FDP reclaim unit handle usage log page 00:08:03.335 ====================================== 00:08:03.335 Number of Reclaim Unit Handles: 8 00:08:03.335 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:03.335 RUH Usage Desc #001: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #002: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #003: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #004: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #005: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #006: RUH Attributes: Unused 00:08:03.335 RUH Usage Desc #007: RUH Attributes: Unused 00:08:03.336 00:08:03.336 FDP statistics log page 00:08:03.336 ======================= 00:08:03.336 Host bytes with metadata written: 466853888 00:08:03.336 [2024-11-04 10:08:08.815374] nvme_ctrlr.c:3627:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62883 terminated unexpected 00:08:03.336 Media bytes with metadata written: 466907136 00:08:03.336 Media bytes erased: 0
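One quick read of the statistics page just printed: dividing media bytes by host bytes gives the effective write amplification on this endurance group so far, and here it is essentially 1, meaning the controller has written barely more to media than the host submitted:

  # 466907136 / 466853888 ~= 1.000114, i.e. about 0.01% overhead at this point.
  awk 'BEGIN { printf "WAF so far: %.6f\n", 466907136 / 466853888 }'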
00:08:03.336 00:08:03.336 FDP events log page 00:08:03.336 =================== 00:08:03.336 Number of FDP events: 0 00:08:03.336 00:08:03.336 NVM Specific Namespace Data 00:08:03.336 =========================== 00:08:03.336 Logical Block Storage Tag Mask: 0 00:08:03.336 Protection Information Capabilities: 00:08:03.336 16b Guard Protection Information Storage Tag Support: No 00:08:03.336 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.336 Storage Tag Check Read Support: No 00:08:03.336 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.336 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI
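The dump for the FDP-enabled controller (12343) ends here and the tool moves on to 12342. When only one controller is of interest, re-running the identify example with a transport ID filter avoids wading through all four dumps; a sketch under the assumption that the -r transport-string filter behaves on this build as it does in the SPDK examples (check -h to confirm):

  # Dump only the FDP controller at 0000:00:13.0 instead of every device.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'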
00:08:03.336 ===================================================== 00:08:03.336 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.336 ===================================================== 00:08:03.336 Controller Capabilities/Features 00:08:03.336 ================================ 00:08:03.336 Vendor ID: 1b36 00:08:03.336 Subsystem Vendor ID: 1af4 00:08:03.336 Serial Number: 12342 00:08:03.336 Model Number: QEMU NVMe Ctrl 00:08:03.336 Firmware Version: 8.0.0 00:08:03.336 Recommended Arb Burst: 6 00:08:03.336 IEEE OUI Identifier: 00 54 52 00:08:03.336 Multi-path I/O 00:08:03.336 May have multiple subsystem ports: No 00:08:03.336 May have multiple controllers: No 00:08:03.336 Associated with SR-IOV VF: No 00:08:03.336 Max Data Transfer Size: 524288 00:08:03.336 Max Number of Namespaces: 256 00:08:03.336 Max Number of I/O Queues: 64 00:08:03.336 NVMe Specification Version (VS): 1.4 00:08:03.336 NVMe Specification Version (Identify): 1.4 00:08:03.336 Maximum Queue Entries: 2048 00:08:03.336 Contiguous Queues Required: Yes 00:08:03.336 Arbitration Mechanisms Supported 00:08:03.336 Weighted Round Robin: Not Supported 00:08:03.336 Vendor Specific: Not Supported 00:08:03.336 Reset Timeout: 7500 ms 00:08:03.336 Doorbell Stride: 4 bytes 00:08:03.336 NVM Subsystem Reset: Not Supported 00:08:03.336 Command Sets Supported 00:08:03.336 NVM Command Set: Supported 00:08:03.336 Boot Partition: Not Supported 00:08:03.336 Memory Page Size Minimum: 4096 bytes 00:08:03.336 Memory Page Size Maximum: 65536 bytes 00:08:03.336 Persistent Memory Region: Not Supported 00:08:03.336 Optional Asynchronous Events Supported 00:08:03.336 Namespace Attribute Notices: Supported 00:08:03.336 Firmware Activation Notices: Not Supported 00:08:03.336 ANA Change Notices: Not Supported 00:08:03.336 PLE Aggregate Log Change Notices: Not Supported 00:08:03.336 LBA Status Info Alert Notices: Not Supported 00:08:03.336 EGE Aggregate Log Change Notices: Not Supported 00:08:03.336 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.336 Zone Descriptor Change Notices: Not Supported 00:08:03.336 Discovery Log Change Notices: Not Supported 00:08:03.336 Controller Attributes 00:08:03.336 128-bit Host Identifier: Not Supported 00:08:03.336 Non-Operational Permissive Mode: Not Supported 00:08:03.336 NVM Sets: Not Supported 00:08:03.336 Read Recovery Levels: Not Supported 00:08:03.336 Endurance Groups: Not Supported 00:08:03.336 Predictable Latency Mode: Not Supported 00:08:03.336 Traffic Based Keep Alive: Not Supported 00:08:03.336 Namespace Granularity: Not Supported 00:08:03.336 SQ Associations: Not Supported 00:08:03.336 UUID List: Not Supported 00:08:03.336 Multi-Domain Subsystem: Not Supported 00:08:03.336 Fixed Capacity Management: Not Supported 00:08:03.336 Variable Capacity Management: Not Supported 00:08:03.336 Delete Endurance Group: Not Supported 00:08:03.336 Delete NVM Set: Not Supported 00:08:03.336 Extended LBA Formats Supported: Supported 00:08:03.336 Flexible Data Placement Supported: Not Supported 00:08:03.336 00:08:03.336 Controller Memory Buffer Support 00:08:03.336 ================================ 00:08:03.336 Supported: No 00:08:03.336 00:08:03.336 Persistent Memory Region Support 00:08:03.336 ================================ 00:08:03.336 Supported: No 00:08:03.336 00:08:03.336 Admin Command Set Attributes 00:08:03.336 ============================ 00:08:03.336 Security Send/Receive: Not Supported 00:08:03.336 Format NVM: Supported 00:08:03.336 Firmware Activate/Download: Not Supported 00:08:03.336 Namespace Management: Supported 00:08:03.336 Device Self-Test: Not Supported 00:08:03.336 Directives: Supported 00:08:03.336 NVMe-MI: Not Supported 00:08:03.336 Virtualization Management: Not Supported 00:08:03.336 Doorbell Buffer Config: Supported 00:08:03.336 Get LBA Status Capability: Not Supported 00:08:03.336 Command & Feature Lockdown Capability: Not Supported 00:08:03.336 Abort Command Limit: 4 00:08:03.336 Async Event Request Limit: 4 00:08:03.336 Number of Firmware Slots: N/A 00:08:03.336 Firmware Slot 1 Read-Only: N/A 00:08:03.336 Firmware Activation Without Reset: N/A 00:08:03.336 Multiple Update Detection Support: N/A 00:08:03.336 Firmware Update Granularity: No Information Provided 00:08:03.336 Per-Namespace SMART Log: Yes 00:08:03.336 Asymmetric Namespace Access Log Page: Not Supported
nqn.2019-08.org.qemu:12342 00:08:03.336 Command Effects Log Page: Supported 00:08:03.336 Get Log Page Extended Data: Supported 00:08:03.336 Telemetry Log Pages: Not Supported 00:08:03.336 Persistent Event Log Pages: Not Supported 00:08:03.336 Supported Log Pages Log Page: May Support 00:08:03.336 Commands Supported & Effects Log Page: Not Supported 00:08:03.336 Feature Identifiers & Effects Log Page:May Support 00:08:03.336 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.336 Data Area 4 for Telemetry Log: Not Supported 00:08:03.336 Error Log Page Entries Supported: 1 00:08:03.336 Keep Alive: Not Supported 00:08:03.336 00:08:03.336 NVM Command Set Attributes 00:08:03.336 ========================== 00:08:03.336 Submission Queue Entry Size 00:08:03.336 Max: 64 00:08:03.336 Min: 64 00:08:03.336 Completion Queue Entry Size 00:08:03.336 Max: 16 00:08:03.336 Min: 16 00:08:03.336 Number of Namespaces: 256 00:08:03.336 Compare Command: Supported 00:08:03.336 Write Uncorrectable Command: Not Supported 00:08:03.336 Dataset Management Command: Supported 00:08:03.336 Write Zeroes Command: Supported 00:08:03.336 Set Features Save Field: Supported 00:08:03.336 Reservations: Not Supported 00:08:03.336 Timestamp: Supported 00:08:03.336 Copy: Supported 00:08:03.336 Volatile Write Cache: Present 00:08:03.336 Atomic Write Unit (Normal): 1 00:08:03.336 Atomic Write Unit (PFail): 1 00:08:03.336 Atomic Compare & Write Unit: 1 00:08:03.336 Fused Compare & Write: Not Supported 00:08:03.336 Scatter-Gather List 00:08:03.336 SGL Command Set: Supported 00:08:03.336 SGL Keyed: Not Supported 00:08:03.337 SGL Bit Bucket Descriptor: Not Supported 00:08:03.337 SGL Metadata Pointer: Not Supported 00:08:03.337 Oversized SGL: Not Supported 00:08:03.337 SGL Metadata Address: Not Supported 00:08:03.337 SGL Offset: Not Supported 00:08:03.337 Transport SGL Data Block: Not Supported 00:08:03.337 Replay Protected Memory Block: Not Supported 00:08:03.337 00:08:03.337 Firmware Slot Information 00:08:03.337 ========================= 00:08:03.337 Active slot: 1 00:08:03.337 Slot 1 Firmware Revision: 1.0 00:08:03.337 00:08:03.337 00:08:03.337 Commands Supported and Effects 00:08:03.337 ============================== 00:08:03.337 Admin Commands 00:08:03.337 -------------- 00:08:03.337 Delete I/O Submission Queue (00h): Supported 00:08:03.337 Create I/O Submission Queue (01h): Supported 00:08:03.337 Get Log Page (02h): Supported 00:08:03.337 Delete I/O Completion Queue (04h): Supported 00:08:03.337 Create I/O Completion Queue (05h): Supported 00:08:03.337 Identify (06h): Supported 00:08:03.337 Abort (08h): Supported 00:08:03.337 Set Features (09h): Supported 00:08:03.337 Get Features (0Ah): Supported 00:08:03.337 Asynchronous Event Request (0Ch): Supported 00:08:03.337 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.337 Directive Send (19h): Supported 00:08:03.337 Directive Receive (1Ah): Supported 00:08:03.337 Virtualization Management (1Ch): Supported 00:08:03.337 Doorbell Buffer Config (7Ch): Supported 00:08:03.337 Format NVM (80h): Supported LBA-Change 00:08:03.337 I/O Commands 00:08:03.337 ------------ 00:08:03.337 Flush (00h): Supported LBA-Change 00:08:03.337 Write (01h): Supported LBA-Change 00:08:03.337 Read (02h): Supported 00:08:03.337 Compare (05h): Supported 00:08:03.337 Write Zeroes (08h): Supported LBA-Change 00:08:03.337 Dataset Management (09h): Supported LBA-Change 00:08:03.337 Unknown (0Ch): Supported 00:08:03.337 Unknown (12h): Supported 00:08:03.337 Copy (19h): Supported LBA-Change 
00:08:03.337 Unknown (1Dh): Supported LBA-Change 00:08:03.337 00:08:03.337 Error Log 00:08:03.337 ========= 00:08:03.337 00:08:03.337 Arbitration 00:08:03.337 =========== 00:08:03.337 Arbitration Burst: no limit 00:08:03.337 00:08:03.337 Power Management 00:08:03.337 ================ 00:08:03.337 Number of Power States: 1 00:08:03.337 Current Power State: Power State #0 00:08:03.337 Power State #0: 00:08:03.337 Max Power: 25.00 W 00:08:03.337 Non-Operational State: Operational 00:08:03.337 Entry Latency: 16 microseconds 00:08:03.337 Exit Latency: 4 microseconds 00:08:03.337 Relative Read Throughput: 0 00:08:03.337 Relative Read Latency: 0 00:08:03.337 Relative Write Throughput: 0 00:08:03.337 Relative Write Latency: 0 00:08:03.337 Idle Power: Not Reported 00:08:03.337 Active Power: Not Reported 00:08:03.337 Non-Operational Permissive Mode: Not Supported 00:08:03.337 00:08:03.337 Health Information 00:08:03.337 ================== 00:08:03.337 Critical Warnings: 00:08:03.337 Available Spare Space: OK 00:08:03.337 Temperature: OK 00:08:03.337 Device Reliability: OK 00:08:03.337 Read Only: No 00:08:03.337 Volatile Memory Backup: OK 00:08:03.337 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.337 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.337 Available Spare: 0% 00:08:03.337 Available Spare Threshold: 0% 00:08:03.337 Life Percentage Used: 0% 00:08:03.337 Data Units Read: 2117 00:08:03.337 Data Units Written: 1905 00:08:03.337 Host Read Commands: 118543 00:08:03.337 Host Write Commands: 116812 00:08:03.337 Controller Busy Time: 0 minutes 00:08:03.337 Power Cycles: 0 00:08:03.337 Power On Hours: 0 hours 00:08:03.337 Unsafe Shutdowns: 0 00:08:03.337 Unrecoverable Media Errors: 0 00:08:03.337 Lifetime Error Log Entries: 0 00:08:03.337 Warning Temperature Time: 0 minutes 00:08:03.337 Critical Temperature Time: 0 minutes 00:08:03.337 00:08:03.337 Number of Queues 00:08:03.337 ================ 00:08:03.337 Number of I/O Submission Queues: 64 00:08:03.337 Number of I/O Completion Queues: 64 00:08:03.337 00:08:03.337 ZNS Specific Controller Data 00:08:03.337 ============================ 00:08:03.337 Zone Append Size Limit: 0 00:08:03.337 00:08:03.337 00:08:03.337 Active Namespaces 00:08:03.337 ================= 00:08:03.337 Namespace ID:1 00:08:03.337 Error Recovery Timeout: Unlimited 00:08:03.337 Command Set Identifier: NVM (00h) 00:08:03.337 Deallocate: Supported 00:08:03.337 Deallocated/Unwritten Error: Supported 00:08:03.337 Deallocated Read Value: All 0x00 00:08:03.337 Deallocate in Write Zeroes: Not Supported 00:08:03.337 Deallocated Guard Field: 0xFFFF 00:08:03.337 Flush: Supported 00:08:03.337 Reservation: Not Supported 00:08:03.337 Namespace Sharing Capabilities: Private 00:08:03.337 Size (in LBAs): 1048576 (4GiB) 00:08:03.337 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.337 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.337 Thin Provisioning: Not Supported 00:08:03.337 Per-NS Atomic Units: No 00:08:03.337 Maximum Single Source Range Length: 128 00:08:03.337 Maximum Copy Length: 128 00:08:03.337 Maximum Source Range Count: 128 00:08:03.337 NGUID/EUI64 Never Reused: No 00:08:03.337 Namespace Write Protected: No 00:08:03.337 Number of LBA Formats: 8 00:08:03.337 Current LBA Format: LBA Format #04 00:08:03.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.337 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.337 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.337 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.337 LBA Format #04: Data Size: 
4096 Metadata Size: 0 00:08:03.337 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.337 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.337 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.337 00:08:03.337 NVM Specific Namespace Data 00:08:03.337 =========================== 00:08:03.337 Logical Block Storage Tag Mask: 0 00:08:03.337 Protection Information Capabilities: 00:08:03.337 16b Guard Protection Information Storage Tag Support: No 00:08:03.337 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.337 Storage Tag Check Read Support: No 00:08:03.337 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Namespace ID:2 00:08:03.337 Error Recovery Timeout: Unlimited 00:08:03.337 Command Set Identifier: NVM (00h) 00:08:03.337 Deallocate: Supported 00:08:03.337 Deallocated/Unwritten Error: Supported 00:08:03.337 Deallocated Read Value: All 0x00 00:08:03.337 Deallocate in Write Zeroes: Not Supported 00:08:03.337 Deallocated Guard Field: 0xFFFF 00:08:03.337 Flush: Supported 00:08:03.337 Reservation: Not Supported 00:08:03.337 Namespace Sharing Capabilities: Private 00:08:03.337 Size (in LBAs): 1048576 (4GiB) 00:08:03.337 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.337 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.337 Thin Provisioning: Not Supported 00:08:03.337 Per-NS Atomic Units: No 00:08:03.337 Maximum Single Source Range Length: 128 00:08:03.337 Maximum Copy Length: 128 00:08:03.337 Maximum Source Range Count: 128 00:08:03.337 NGUID/EUI64 Never Reused: No 00:08:03.337 Namespace Write Protected: No 00:08:03.337 Number of LBA Formats: 8 00:08:03.337 Current LBA Format: LBA Format #04 00:08:03.337 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.337 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.337 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.337 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.337 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.337 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.337 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.337 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.337 00:08:03.337 NVM Specific Namespace Data 00:08:03.337 =========================== 00:08:03.337 Logical Block Storage Tag Mask: 0 00:08:03.337 Protection Information Capabilities: 00:08:03.337 16b Guard Protection Information Storage Tag Support: No 00:08:03.337 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.337 Storage Tag Check Read Support: No 00:08:03.337 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.337 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard 
PI 00:08:03.337 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Namespace ID:3 00:08:03.338 Error Recovery Timeout: Unlimited 00:08:03.338 Command Set Identifier: NVM (00h) 00:08:03.338 Deallocate: Supported 00:08:03.338 Deallocated/Unwritten Error: Supported 00:08:03.338 Deallocated Read Value: All 0x00 00:08:03.338 Deallocate in Write Zeroes: Not Supported 00:08:03.338 Deallocated Guard Field: 0xFFFF 00:08:03.338 Flush: Supported 00:08:03.338 Reservation: Not Supported 00:08:03.338 Namespace Sharing Capabilities: Private 00:08:03.338 Size (in LBAs): 1048576 (4GiB) 00:08:03.338 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.338 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.338 Thin Provisioning: Not Supported 00:08:03.338 Per-NS Atomic Units: No 00:08:03.338 Maximum Single Source Range Length: 128 00:08:03.338 Maximum Copy Length: 128 00:08:03.338 Maximum Source Range Count: 128 00:08:03.338 NGUID/EUI64 Never Reused: No 00:08:03.338 Namespace Write Protected: No 00:08:03.338 Number of LBA Formats: 8 00:08:03.338 Current LBA Format: LBA Format #04 00:08:03.338 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.338 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.338 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.338 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.338 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.338 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.338 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.338 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.338 00:08:03.338 NVM Specific Namespace Data 00:08:03.338 =========================== 00:08:03.338 Logical Block Storage Tag Mask: 0 00:08:03.338 Protection Information Capabilities: 00:08:03.338 16b Guard Protection Information Storage Tag Support: No 00:08:03.338 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.338 Storage Tag Check Read Support: No 00:08:03.338 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.338 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.338 10:08:08 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:03.632 ===================================================== 00:08:03.632 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:03.632 ===================================================== 00:08:03.632 Controller Capabilities/Features 00:08:03.632 ================================ 00:08:03.632 Vendor ID: 1b36 00:08:03.632 Subsystem Vendor ID: 1af4 00:08:03.632 Serial Number: 12340 00:08:03.632 Model Number: QEMU NVMe Ctrl 00:08:03.632 Firmware Version: 8.0.0 00:08:03.632 Recommended Arb Burst: 6 00:08:03.632 IEEE OUI Identifier: 00 54 52 00:08:03.632 Multi-path I/O 00:08:03.632 May have multiple subsystem ports: No 00:08:03.632 May have multiple controllers: No 00:08:03.632 Associated with SR-IOV VF: No 00:08:03.632 Max Data Transfer Size: 524288 00:08:03.632 Max Number of Namespaces: 256 00:08:03.632 Max Number of I/O Queues: 64 00:08:03.632 NVMe Specification Version (VS): 1.4 00:08:03.632 NVMe Specification Version (Identify): 1.4 00:08:03.632 Maximum Queue Entries: 2048 00:08:03.632 Contiguous Queues Required: Yes 00:08:03.632 Arbitration Mechanisms Supported 00:08:03.632 Weighted Round Robin: Not Supported 00:08:03.632 Vendor Specific: Not Supported 00:08:03.632 Reset Timeout: 7500 ms 00:08:03.632 Doorbell Stride: 4 bytes 00:08:03.632 NVM Subsystem Reset: Not Supported 00:08:03.632 Command Sets Supported 00:08:03.632 NVM Command Set: Supported 00:08:03.632 Boot Partition: Not Supported 00:08:03.632 Memory Page Size Minimum: 4096 bytes 00:08:03.632 Memory Page Size Maximum: 65536 bytes 00:08:03.632 Persistent Memory Region: Not Supported 00:08:03.632 Optional Asynchronous Events Supported 00:08:03.632 Namespace Attribute Notices: Supported 00:08:03.632 Firmware Activation Notices: Not Supported 00:08:03.632 ANA Change Notices: Not Supported 00:08:03.632 PLE Aggregate Log Change Notices: Not Supported 00:08:03.632 LBA Status Info Alert Notices: Not Supported 00:08:03.632 EGE Aggregate Log Change Notices: Not Supported 00:08:03.632 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.632 Zone Descriptor Change Notices: Not Supported 00:08:03.632 Discovery Log Change Notices: Not Supported 00:08:03.632 Controller Attributes 00:08:03.632 128-bit Host Identifier: Not Supported 00:08:03.632 Non-Operational Permissive Mode: Not Supported 00:08:03.632 NVM Sets: Not Supported 00:08:03.632 Read Recovery Levels: Not Supported 00:08:03.632 Endurance Groups: Not Supported 00:08:03.632 Predictable Latency Mode: Not Supported 00:08:03.632 Traffic Based Keep ALive: Not Supported 00:08:03.632 Namespace Granularity: Not Supported 00:08:03.632 SQ Associations: Not Supported 00:08:03.632 UUID List: Not Supported 00:08:03.632 Multi-Domain Subsystem: Not Supported 00:08:03.632 Fixed Capacity Management: Not Supported 00:08:03.632 Variable Capacity Management: Not Supported 00:08:03.632 Delete Endurance Group: Not Supported 00:08:03.632 Delete NVM Set: Not Supported 00:08:03.632 Extended LBA Formats Supported: Supported 00:08:03.632 Flexible Data Placement Supported: Not Supported 00:08:03.632 00:08:03.632 Controller Memory Buffer Support 00:08:03.632 ================================ 00:08:03.632 Supported: No 00:08:03.632 00:08:03.632 Persistent Memory Region Support 00:08:03.632 ================================ 00:08:03.632 Supported: No 00:08:03.632 00:08:03.632 Admin Command Set Attributes 00:08:03.632 ============================ 00:08:03.632 Security Send/Receive: Not Supported 00:08:03.632 
Format NVM: Supported 00:08:03.632 Firmware Activate/Download: Not Supported 00:08:03.632 Namespace Management: Supported 00:08:03.632 Device Self-Test: Not Supported 00:08:03.632 Directives: Supported 00:08:03.632 NVMe-MI: Not Supported 00:08:03.632 Virtualization Management: Not Supported 00:08:03.632 Doorbell Buffer Config: Supported 00:08:03.632 Get LBA Status Capability: Not Supported 00:08:03.632 Command & Feature Lockdown Capability: Not Supported 00:08:03.632 Abort Command Limit: 4 00:08:03.632 Async Event Request Limit: 4 00:08:03.632 Number of Firmware Slots: N/A 00:08:03.632 Firmware Slot 1 Read-Only: N/A 00:08:03.632 Firmware Activation Without Reset: N/A 00:08:03.632 Multiple Update Detection Support: N/A 00:08:03.632 Firmware Update Granularity: No Information Provided 00:08:03.632 Per-Namespace SMART Log: Yes 00:08:03.632 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.632 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:03.632 Command Effects Log Page: Supported 00:08:03.632 Get Log Page Extended Data: Supported 00:08:03.632 Telemetry Log Pages: Not Supported 00:08:03.632 Persistent Event Log Pages: Not Supported 00:08:03.632 Supported Log Pages Log Page: May Support 00:08:03.632 Commands Supported & Effects Log Page: Not Supported 00:08:03.632 Feature Identifiers & Effects Log Page:May Support 00:08:03.632 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.632 Data Area 4 for Telemetry Log: Not Supported 00:08:03.632 Error Log Page Entries Supported: 1 00:08:03.632 Keep Alive: Not Supported 00:08:03.632 00:08:03.632 NVM Command Set Attributes 00:08:03.632 ========================== 00:08:03.632 Submission Queue Entry Size 00:08:03.632 Max: 64 00:08:03.632 Min: 64 00:08:03.632 Completion Queue Entry Size 00:08:03.632 Max: 16 00:08:03.632 Min: 16 00:08:03.632 Number of Namespaces: 256 00:08:03.632 Compare Command: Supported 00:08:03.632 Write Uncorrectable Command: Not Supported 00:08:03.632 Dataset Management Command: Supported 00:08:03.632 Write Zeroes Command: Supported 00:08:03.632 Set Features Save Field: Supported 00:08:03.632 Reservations: Not Supported 00:08:03.632 Timestamp: Supported 00:08:03.632 Copy: Supported 00:08:03.632 Volatile Write Cache: Present 00:08:03.632 Atomic Write Unit (Normal): 1 00:08:03.632 Atomic Write Unit (PFail): 1 00:08:03.632 Atomic Compare & Write Unit: 1 00:08:03.632 Fused Compare & Write: Not Supported 00:08:03.632 Scatter-Gather List 00:08:03.632 SGL Command Set: Supported 00:08:03.633 SGL Keyed: Not Supported 00:08:03.633 SGL Bit Bucket Descriptor: Not Supported 00:08:03.633 SGL Metadata Pointer: Not Supported 00:08:03.633 Oversized SGL: Not Supported 00:08:03.633 SGL Metadata Address: Not Supported 00:08:03.633 SGL Offset: Not Supported 00:08:03.633 Transport SGL Data Block: Not Supported 00:08:03.633 Replay Protected Memory Block: Not Supported 00:08:03.633 00:08:03.633 Firmware Slot Information 00:08:03.633 ========================= 00:08:03.633 Active slot: 1 00:08:03.633 Slot 1 Firmware Revision: 1.0 00:08:03.633 00:08:03.633 00:08:03.633 Commands Supported and Effects 00:08:03.633 ============================== 00:08:03.633 Admin Commands 00:08:03.633 -------------- 00:08:03.633 Delete I/O Submission Queue (00h): Supported 00:08:03.633 Create I/O Submission Queue (01h): Supported 00:08:03.633 Get Log Page (02h): Supported 00:08:03.633 Delete I/O Completion Queue (04h): Supported 00:08:03.633 Create I/O Completion Queue (05h): Supported 00:08:03.633 Identify (06h): Supported 00:08:03.633 Abort (08h): Supported 
00:08:03.633 Set Features (09h): Supported 00:08:03.633 Get Features (0Ah): Supported 00:08:03.633 Asynchronous Event Request (0Ch): Supported 00:08:03.633 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.633 Directive Send (19h): Supported 00:08:03.633 Directive Receive (1Ah): Supported 00:08:03.633 Virtualization Management (1Ch): Supported 00:08:03.633 Doorbell Buffer Config (7Ch): Supported 00:08:03.633 Format NVM (80h): Supported LBA-Change 00:08:03.633 I/O Commands 00:08:03.633 ------------ 00:08:03.633 Flush (00h): Supported LBA-Change 00:08:03.633 Write (01h): Supported LBA-Change 00:08:03.633 Read (02h): Supported 00:08:03.633 Compare (05h): Supported 00:08:03.633 Write Zeroes (08h): Supported LBA-Change 00:08:03.633 Dataset Management (09h): Supported LBA-Change 00:08:03.633 Unknown (0Ch): Supported 00:08:03.633 Unknown (12h): Supported 00:08:03.633 Copy (19h): Supported LBA-Change 00:08:03.633 Unknown (1Dh): Supported LBA-Change 00:08:03.633 00:08:03.633 Error Log 00:08:03.633 ========= 00:08:03.633 00:08:03.633 Arbitration 00:08:03.633 =========== 00:08:03.633 Arbitration Burst: no limit 00:08:03.633 00:08:03.633 Power Management 00:08:03.633 ================ 00:08:03.633 Number of Power States: 1 00:08:03.633 Current Power State: Power State #0 00:08:03.633 Power State #0: 00:08:03.633 Max Power: 25.00 W 00:08:03.633 Non-Operational State: Operational 00:08:03.633 Entry Latency: 16 microseconds 00:08:03.633 Exit Latency: 4 microseconds 00:08:03.633 Relative Read Throughput: 0 00:08:03.633 Relative Read Latency: 0 00:08:03.633 Relative Write Throughput: 0 00:08:03.633 Relative Write Latency: 0 00:08:03.633 Idle Power: Not Reported 00:08:03.633 Active Power: Not Reported 00:08:03.633 Non-Operational Permissive Mode: Not Supported 00:08:03.633 00:08:03.633 Health Information 00:08:03.633 ================== 00:08:03.633 Critical Warnings: 00:08:03.633 Available Spare Space: OK 00:08:03.633 Temperature: OK 00:08:03.633 Device Reliability: OK 00:08:03.633 Read Only: No 00:08:03.633 Volatile Memory Backup: OK 00:08:03.633 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.633 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.633 Available Spare: 0% 00:08:03.633 Available Spare Threshold: 0% 00:08:03.633 Life Percentage Used: 0% 00:08:03.633 Data Units Read: 665 00:08:03.633 Data Units Written: 593 00:08:03.633 Host Read Commands: 38877 00:08:03.633 Host Write Commands: 38663 00:08:03.633 Controller Busy Time: 0 minutes 00:08:03.633 Power Cycles: 0 00:08:03.633 Power On Hours: 0 hours 00:08:03.633 Unsafe Shutdowns: 0 00:08:03.633 Unrecoverable Media Errors: 0 00:08:03.633 Lifetime Error Log Entries: 0 00:08:03.633 Warning Temperature Time: 0 minutes 00:08:03.633 Critical Temperature Time: 0 minutes 00:08:03.633 00:08:03.633 Number of Queues 00:08:03.633 ================ 00:08:03.633 Number of I/O Submission Queues: 64 00:08:03.633 Number of I/O Completion Queues: 64 00:08:03.633 00:08:03.633 ZNS Specific Controller Data 00:08:03.633 ============================ 00:08:03.633 Zone Append Size Limit: 0 00:08:03.633 00:08:03.633 00:08:03.633 Active Namespaces 00:08:03.633 ================= 00:08:03.633 Namespace ID:1 00:08:03.633 Error Recovery Timeout: Unlimited 00:08:03.633 Command Set Identifier: NVM (00h) 00:08:03.633 Deallocate: Supported 00:08:03.633 Deallocated/Unwritten Error: Supported 00:08:03.633 Deallocated Read Value: All 0x00 00:08:03.633 Deallocate in Write Zeroes: Not Supported 00:08:03.633 Deallocated Guard Field: 0xFFFF 00:08:03.633 Flush: 
Supported 00:08:03.633 Reservation: Not Supported 00:08:03.633 Metadata Transferred as: Separate Metadata Buffer 00:08:03.633 Namespace Sharing Capabilities: Private 00:08:03.633 Size (in LBAs): 1548666 (5GiB) 00:08:03.633 Capacity (in LBAs): 1548666 (5GiB) 00:08:03.633 Utilization (in LBAs): 1548666 (5GiB) 00:08:03.633 Thin Provisioning: Not Supported 00:08:03.633 Per-NS Atomic Units: No 00:08:03.633 Maximum Single Source Range Length: 128 00:08:03.633 Maximum Copy Length: 128 00:08:03.633 Maximum Source Range Count: 128 00:08:03.633 NGUID/EUI64 Never Reused: No 00:08:03.633 Namespace Write Protected: No 00:08:03.633 Number of LBA Formats: 8 00:08:03.633 Current LBA Format: LBA Format #07 00:08:03.633 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.633 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.633 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.633 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.633 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.633 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.633 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.633 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.633 00:08:03.633 NVM Specific Namespace Data 00:08:03.633 =========================== 00:08:03.633 Logical Block Storage Tag Mask: 0 00:08:03.633 Protection Information Capabilities: 00:08:03.633 16b Guard Protection Information Storage Tag Support: No 00:08:03.633 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.633 Storage Tag Check Read Support: No 00:08:03.633 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.633 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.633 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:03.633 ===================================================== 00:08:03.633 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:03.633 ===================================================== 00:08:03.633 Controller Capabilities/Features 00:08:03.633 ================================ 00:08:03.633 Vendor ID: 1b36 00:08:03.633 Subsystem Vendor ID: 1af4 00:08:03.633 Serial Number: 12341 00:08:03.633 Model Number: QEMU NVMe Ctrl 00:08:03.633 Firmware Version: 8.0.0 00:08:03.633 Recommended Arb Burst: 6 00:08:03.633 IEEE OUI Identifier: 00 54 52 00:08:03.633 Multi-path I/O 00:08:03.633 May have multiple subsystem ports: No 00:08:03.633 May have multiple controllers: No 00:08:03.633 Associated with SR-IOV VF: No 00:08:03.633 Max Data Transfer Size: 524288 00:08:03.633 Max Number of Namespaces: 256 00:08:03.633 Max Number of I/O Queues: 64 00:08:03.633 NVMe 
Specification Version (VS): 1.4 00:08:03.633 NVMe Specification Version (Identify): 1.4 00:08:03.633 Maximum Queue Entries: 2048 00:08:03.633 Contiguous Queues Required: Yes 00:08:03.633 Arbitration Mechanisms Supported 00:08:03.633 Weighted Round Robin: Not Supported 00:08:03.633 Vendor Specific: Not Supported 00:08:03.633 Reset Timeout: 7500 ms 00:08:03.633 Doorbell Stride: 4 bytes 00:08:03.633 NVM Subsystem Reset: Not Supported 00:08:03.633 Command Sets Supported 00:08:03.633 NVM Command Set: Supported 00:08:03.633 Boot Partition: Not Supported 00:08:03.633 Memory Page Size Minimum: 4096 bytes 00:08:03.633 Memory Page Size Maximum: 65536 bytes 00:08:03.633 Persistent Memory Region: Not Supported 00:08:03.633 Optional Asynchronous Events Supported 00:08:03.633 Namespace Attribute Notices: Supported 00:08:03.633 Firmware Activation Notices: Not Supported 00:08:03.633 ANA Change Notices: Not Supported 00:08:03.633 PLE Aggregate Log Change Notices: Not Supported 00:08:03.633 LBA Status Info Alert Notices: Not Supported 00:08:03.634 EGE Aggregate Log Change Notices: Not Supported 00:08:03.634 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.634 Zone Descriptor Change Notices: Not Supported 00:08:03.634 Discovery Log Change Notices: Not Supported 00:08:03.634 Controller Attributes 00:08:03.634 128-bit Host Identifier: Not Supported 00:08:03.634 Non-Operational Permissive Mode: Not Supported 00:08:03.634 NVM Sets: Not Supported 00:08:03.634 Read Recovery Levels: Not Supported 00:08:03.634 Endurance Groups: Not Supported 00:08:03.634 Predictable Latency Mode: Not Supported 00:08:03.634 Traffic Based Keep ALive: Not Supported 00:08:03.634 Namespace Granularity: Not Supported 00:08:03.634 SQ Associations: Not Supported 00:08:03.634 UUID List: Not Supported 00:08:03.634 Multi-Domain Subsystem: Not Supported 00:08:03.634 Fixed Capacity Management: Not Supported 00:08:03.634 Variable Capacity Management: Not Supported 00:08:03.634 Delete Endurance Group: Not Supported 00:08:03.634 Delete NVM Set: Not Supported 00:08:03.634 Extended LBA Formats Supported: Supported 00:08:03.634 Flexible Data Placement Supported: Not Supported 00:08:03.634 00:08:03.634 Controller Memory Buffer Support 00:08:03.634 ================================ 00:08:03.634 Supported: No 00:08:03.634 00:08:03.634 Persistent Memory Region Support 00:08:03.634 ================================ 00:08:03.634 Supported: No 00:08:03.634 00:08:03.634 Admin Command Set Attributes 00:08:03.634 ============================ 00:08:03.634 Security Send/Receive: Not Supported 00:08:03.634 Format NVM: Supported 00:08:03.634 Firmware Activate/Download: Not Supported 00:08:03.634 Namespace Management: Supported 00:08:03.634 Device Self-Test: Not Supported 00:08:03.634 Directives: Supported 00:08:03.634 NVMe-MI: Not Supported 00:08:03.634 Virtualization Management: Not Supported 00:08:03.634 Doorbell Buffer Config: Supported 00:08:03.634 Get LBA Status Capability: Not Supported 00:08:03.634 Command & Feature Lockdown Capability: Not Supported 00:08:03.634 Abort Command Limit: 4 00:08:03.634 Async Event Request Limit: 4 00:08:03.634 Number of Firmware Slots: N/A 00:08:03.634 Firmware Slot 1 Read-Only: N/A 00:08:03.634 Firmware Activation Without Reset: N/A 00:08:03.634 Multiple Update Detection Support: N/A 00:08:03.634 Firmware Update Granularity: No Information Provided 00:08:03.634 Per-Namespace SMART Log: Yes 00:08:03.634 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.634 Subsystem NQN: nqn.2019-08.org.qemu:12341 
00:08:03.634 Command Effects Log Page: Supported 00:08:03.634 Get Log Page Extended Data: Supported 00:08:03.634 Telemetry Log Pages: Not Supported 00:08:03.634 Persistent Event Log Pages: Not Supported 00:08:03.634 Supported Log Pages Log Page: May Support 00:08:03.634 Commands Supported & Effects Log Page: Not Supported 00:08:03.634 Feature Identifiers & Effects Log Page:May Support 00:08:03.634 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.634 Data Area 4 for Telemetry Log: Not Supported 00:08:03.634 Error Log Page Entries Supported: 1 00:08:03.634 Keep Alive: Not Supported 00:08:03.634 00:08:03.634 NVM Command Set Attributes 00:08:03.634 ========================== 00:08:03.634 Submission Queue Entry Size 00:08:03.634 Max: 64 00:08:03.634 Min: 64 00:08:03.634 Completion Queue Entry Size 00:08:03.634 Max: 16 00:08:03.634 Min: 16 00:08:03.634 Number of Namespaces: 256 00:08:03.634 Compare Command: Supported 00:08:03.634 Write Uncorrectable Command: Not Supported 00:08:03.634 Dataset Management Command: Supported 00:08:03.634 Write Zeroes Command: Supported 00:08:03.634 Set Features Save Field: Supported 00:08:03.634 Reservations: Not Supported 00:08:03.634 Timestamp: Supported 00:08:03.634 Copy: Supported 00:08:03.634 Volatile Write Cache: Present 00:08:03.634 Atomic Write Unit (Normal): 1 00:08:03.634 Atomic Write Unit (PFail): 1 00:08:03.634 Atomic Compare & Write Unit: 1 00:08:03.634 Fused Compare & Write: Not Supported 00:08:03.634 Scatter-Gather List 00:08:03.634 SGL Command Set: Supported 00:08:03.634 SGL Keyed: Not Supported 00:08:03.634 SGL Bit Bucket Descriptor: Not Supported 00:08:03.634 SGL Metadata Pointer: Not Supported 00:08:03.634 Oversized SGL: Not Supported 00:08:03.634 SGL Metadata Address: Not Supported 00:08:03.634 SGL Offset: Not Supported 00:08:03.634 Transport SGL Data Block: Not Supported 00:08:03.634 Replay Protected Memory Block: Not Supported 00:08:03.634 00:08:03.634 Firmware Slot Information 00:08:03.634 ========================= 00:08:03.634 Active slot: 1 00:08:03.634 Slot 1 Firmware Revision: 1.0 00:08:03.634 00:08:03.634 00:08:03.634 Commands Supported and Effects 00:08:03.634 ============================== 00:08:03.634 Admin Commands 00:08:03.634 -------------- 00:08:03.634 Delete I/O Submission Queue (00h): Supported 00:08:03.634 Create I/O Submission Queue (01h): Supported 00:08:03.634 Get Log Page (02h): Supported 00:08:03.634 Delete I/O Completion Queue (04h): Supported 00:08:03.634 Create I/O Completion Queue (05h): Supported 00:08:03.634 Identify (06h): Supported 00:08:03.634 Abort (08h): Supported 00:08:03.634 Set Features (09h): Supported 00:08:03.634 Get Features (0Ah): Supported 00:08:03.634 Asynchronous Event Request (0Ch): Supported 00:08:03.634 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.634 Directive Send (19h): Supported 00:08:03.634 Directive Receive (1Ah): Supported 00:08:03.634 Virtualization Management (1Ch): Supported 00:08:03.634 Doorbell Buffer Config (7Ch): Supported 00:08:03.634 Format NVM (80h): Supported LBA-Change 00:08:03.634 I/O Commands 00:08:03.634 ------------ 00:08:03.634 Flush (00h): Supported LBA-Change 00:08:03.634 Write (01h): Supported LBA-Change 00:08:03.634 Read (02h): Supported 00:08:03.634 Compare (05h): Supported 00:08:03.634 Write Zeroes (08h): Supported LBA-Change 00:08:03.634 Dataset Management (09h): Supported LBA-Change 00:08:03.634 Unknown (0Ch): Supported 00:08:03.634 Unknown (12h): Supported 00:08:03.634 Copy (19h): Supported LBA-Change 00:08:03.634 Unknown (1Dh): 
Supported LBA-Change 00:08:03.634 00:08:03.634 Error Log 00:08:03.634 ========= 00:08:03.634 00:08:03.634 Arbitration 00:08:03.634 =========== 00:08:03.634 Arbitration Burst: no limit 00:08:03.634 00:08:03.634 Power Management 00:08:03.634 ================ 00:08:03.634 Number of Power States: 1 00:08:03.634 Current Power State: Power State #0 00:08:03.634 Power State #0: 00:08:03.634 Max Power: 25.00 W 00:08:03.634 Non-Operational State: Operational 00:08:03.634 Entry Latency: 16 microseconds 00:08:03.634 Exit Latency: 4 microseconds 00:08:03.634 Relative Read Throughput: 0 00:08:03.634 Relative Read Latency: 0 00:08:03.634 Relative Write Throughput: 0 00:08:03.634 Relative Write Latency: 0 00:08:03.634 Idle Power: Not Reported 00:08:03.634 Active Power: Not Reported 00:08:03.634 Non-Operational Permissive Mode: Not Supported 00:08:03.634 00:08:03.634 Health Information 00:08:03.634 ================== 00:08:03.634 Critical Warnings: 00:08:03.634 Available Spare Space: OK 00:08:03.634 Temperature: OK 00:08:03.634 Device Reliability: OK 00:08:03.634 Read Only: No 00:08:03.634 Volatile Memory Backup: OK 00:08:03.634 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.634 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.634 Available Spare: 0% 00:08:03.634 Available Spare Threshold: 0% 00:08:03.634 Life Percentage Used: 0% 00:08:03.634 Data Units Read: 1003 00:08:03.634 Data Units Written: 868 00:08:03.634 Host Read Commands: 56695 00:08:03.634 Host Write Commands: 55448 00:08:03.634 Controller Busy Time: 0 minutes 00:08:03.634 Power Cycles: 0 00:08:03.634 Power On Hours: 0 hours 00:08:03.634 Unsafe Shutdowns: 0 00:08:03.634 Unrecoverable Media Errors: 0 00:08:03.634 Lifetime Error Log Entries: 0 00:08:03.634 Warning Temperature Time: 0 minutes 00:08:03.634 Critical Temperature Time: 0 minutes 00:08:03.634 00:08:03.634 Number of Queues 00:08:03.634 ================ 00:08:03.634 Number of I/O Submission Queues: 64 00:08:03.634 Number of I/O Completion Queues: 64 00:08:03.634 00:08:03.634 ZNS Specific Controller Data 00:08:03.634 ============================ 00:08:03.634 Zone Append Size Limit: 0 00:08:03.634 00:08:03.634 00:08:03.634 Active Namespaces 00:08:03.634 ================= 00:08:03.634 Namespace ID:1 00:08:03.634 Error Recovery Timeout: Unlimited 00:08:03.634 Command Set Identifier: NVM (00h) 00:08:03.634 Deallocate: Supported 00:08:03.634 Deallocated/Unwritten Error: Supported 00:08:03.634 Deallocated Read Value: All 0x00 00:08:03.634 Deallocate in Write Zeroes: Not Supported 00:08:03.634 Deallocated Guard Field: 0xFFFF 00:08:03.634 Flush: Supported 00:08:03.635 Reservation: Not Supported 00:08:03.635 Namespace Sharing Capabilities: Private 00:08:03.635 Size (in LBAs): 1310720 (5GiB) 00:08:03.635 Capacity (in LBAs): 1310720 (5GiB) 00:08:03.635 Utilization (in LBAs): 1310720 (5GiB) 00:08:03.635 Thin Provisioning: Not Supported 00:08:03.635 Per-NS Atomic Units: No 00:08:03.635 Maximum Single Source Range Length: 128 00:08:03.635 Maximum Copy Length: 128 00:08:03.635 Maximum Source Range Count: 128 00:08:03.635 NGUID/EUI64 Never Reused: No 00:08:03.635 Namespace Write Protected: No 00:08:03.635 Number of LBA Formats: 8 00:08:03.635 Current LBA Format: LBA Format #04 00:08:03.635 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.635 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.635 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.635 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.635 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:08:03.635 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.635 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.635 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.635 00:08:03.635 NVM Specific Namespace Data 00:08:03.635 =========================== 00:08:03.635 Logical Block Storage Tag Mask: 0 00:08:03.635 Protection Information Capabilities: 00:08:03.635 16b Guard Protection Information Storage Tag Support: No 00:08:03.635 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.635 Storage Tag Check Read Support: No 00:08:03.635 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.635 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.635 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:03.893 ===================================================== 00:08:03.893 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:03.893 ===================================================== 00:08:03.893 Controller Capabilities/Features 00:08:03.893 ================================ 00:08:03.893 Vendor ID: 1b36 00:08:03.893 Subsystem Vendor ID: 1af4 00:08:03.893 Serial Number: 12342 00:08:03.893 Model Number: QEMU NVMe Ctrl 00:08:03.893 Firmware Version: 8.0.0 00:08:03.893 Recommended Arb Burst: 6 00:08:03.893 IEEE OUI Identifier: 00 54 52 00:08:03.893 Multi-path I/O 00:08:03.893 May have multiple subsystem ports: No 00:08:03.893 May have multiple controllers: No 00:08:03.893 Associated with SR-IOV VF: No 00:08:03.893 Max Data Transfer Size: 524288 00:08:03.893 Max Number of Namespaces: 256 00:08:03.893 Max Number of I/O Queues: 64 00:08:03.893 NVMe Specification Version (VS): 1.4 00:08:03.893 NVMe Specification Version (Identify): 1.4 00:08:03.893 Maximum Queue Entries: 2048 00:08:03.893 Contiguous Queues Required: Yes 00:08:03.893 Arbitration Mechanisms Supported 00:08:03.893 Weighted Round Robin: Not Supported 00:08:03.893 Vendor Specific: Not Supported 00:08:03.893 Reset Timeout: 7500 ms 00:08:03.893 Doorbell Stride: 4 bytes 00:08:03.893 NVM Subsystem Reset: Not Supported 00:08:03.893 Command Sets Supported 00:08:03.893 NVM Command Set: Supported 00:08:03.893 Boot Partition: Not Supported 00:08:03.893 Memory Page Size Minimum: 4096 bytes 00:08:03.893 Memory Page Size Maximum: 65536 bytes 00:08:03.893 Persistent Memory Region: Not Supported 00:08:03.893 Optional Asynchronous Events Supported 00:08:03.893 Namespace Attribute Notices: Supported 00:08:03.893 Firmware Activation Notices: Not Supported 00:08:03.893 ANA Change Notices: Not Supported 00:08:03.893 PLE Aggregate Log Change Notices: Not Supported 00:08:03.893 LBA Status Info Alert Notices: 
Not Supported 00:08:03.893 EGE Aggregate Log Change Notices: Not Supported 00:08:03.893 Normal NVM Subsystem Shutdown event: Not Supported 00:08:03.893 Zone Descriptor Change Notices: Not Supported 00:08:03.893 Discovery Log Change Notices: Not Supported 00:08:03.893 Controller Attributes 00:08:03.893 128-bit Host Identifier: Not Supported 00:08:03.893 Non-Operational Permissive Mode: Not Supported 00:08:03.893 NVM Sets: Not Supported 00:08:03.893 Read Recovery Levels: Not Supported 00:08:03.893 Endurance Groups: Not Supported 00:08:03.893 Predictable Latency Mode: Not Supported 00:08:03.893 Traffic Based Keep ALive: Not Supported 00:08:03.893 Namespace Granularity: Not Supported 00:08:03.893 SQ Associations: Not Supported 00:08:03.893 UUID List: Not Supported 00:08:03.893 Multi-Domain Subsystem: Not Supported 00:08:03.893 Fixed Capacity Management: Not Supported 00:08:03.893 Variable Capacity Management: Not Supported 00:08:03.893 Delete Endurance Group: Not Supported 00:08:03.893 Delete NVM Set: Not Supported 00:08:03.893 Extended LBA Formats Supported: Supported 00:08:03.893 Flexible Data Placement Supported: Not Supported 00:08:03.893 00:08:03.893 Controller Memory Buffer Support 00:08:03.893 ================================ 00:08:03.893 Supported: No 00:08:03.893 00:08:03.893 Persistent Memory Region Support 00:08:03.893 ================================ 00:08:03.893 Supported: No 00:08:03.893 00:08:03.893 Admin Command Set Attributes 00:08:03.893 ============================ 00:08:03.893 Security Send/Receive: Not Supported 00:08:03.893 Format NVM: Supported 00:08:03.893 Firmware Activate/Download: Not Supported 00:08:03.893 Namespace Management: Supported 00:08:03.893 Device Self-Test: Not Supported 00:08:03.893 Directives: Supported 00:08:03.893 NVMe-MI: Not Supported 00:08:03.893 Virtualization Management: Not Supported 00:08:03.893 Doorbell Buffer Config: Supported 00:08:03.893 Get LBA Status Capability: Not Supported 00:08:03.893 Command & Feature Lockdown Capability: Not Supported 00:08:03.893 Abort Command Limit: 4 00:08:03.893 Async Event Request Limit: 4 00:08:03.893 Number of Firmware Slots: N/A 00:08:03.893 Firmware Slot 1 Read-Only: N/A 00:08:03.893 Firmware Activation Without Reset: N/A 00:08:03.893 Multiple Update Detection Support: N/A 00:08:03.893 Firmware Update Granularity: No Information Provided 00:08:03.893 Per-Namespace SMART Log: Yes 00:08:03.893 Asymmetric Namespace Access Log Page: Not Supported 00:08:03.894 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:03.894 Command Effects Log Page: Supported 00:08:03.894 Get Log Page Extended Data: Supported 00:08:03.894 Telemetry Log Pages: Not Supported 00:08:03.894 Persistent Event Log Pages: Not Supported 00:08:03.894 Supported Log Pages Log Page: May Support 00:08:03.894 Commands Supported & Effects Log Page: Not Supported 00:08:03.894 Feature Identifiers & Effects Log Page:May Support 00:08:03.894 NVMe-MI Commands & Effects Log Page: May Support 00:08:03.894 Data Area 4 for Telemetry Log: Not Supported 00:08:03.894 Error Log Page Entries Supported: 1 00:08:03.894 Keep Alive: Not Supported 00:08:03.894 00:08:03.894 NVM Command Set Attributes 00:08:03.894 ========================== 00:08:03.894 Submission Queue Entry Size 00:08:03.894 Max: 64 00:08:03.894 Min: 64 00:08:03.894 Completion Queue Entry Size 00:08:03.894 Max: 16 00:08:03.894 Min: 16 00:08:03.894 Number of Namespaces: 256 00:08:03.894 Compare Command: Supported 00:08:03.894 Write Uncorrectable Command: Not Supported 00:08:03.894 Dataset Management Command: 
Supported 00:08:03.894 Write Zeroes Command: Supported 00:08:03.894 Set Features Save Field: Supported 00:08:03.894 Reservations: Not Supported 00:08:03.894 Timestamp: Supported 00:08:03.894 Copy: Supported 00:08:03.894 Volatile Write Cache: Present 00:08:03.894 Atomic Write Unit (Normal): 1 00:08:03.894 Atomic Write Unit (PFail): 1 00:08:03.894 Atomic Compare & Write Unit: 1 00:08:03.894 Fused Compare & Write: Not Supported 00:08:03.894 Scatter-Gather List 00:08:03.894 SGL Command Set: Supported 00:08:03.894 SGL Keyed: Not Supported 00:08:03.894 SGL Bit Bucket Descriptor: Not Supported 00:08:03.894 SGL Metadata Pointer: Not Supported 00:08:03.894 Oversized SGL: Not Supported 00:08:03.894 SGL Metadata Address: Not Supported 00:08:03.894 SGL Offset: Not Supported 00:08:03.894 Transport SGL Data Block: Not Supported 00:08:03.894 Replay Protected Memory Block: Not Supported 00:08:03.894 00:08:03.894 Firmware Slot Information 00:08:03.894 ========================= 00:08:03.894 Active slot: 1 00:08:03.894 Slot 1 Firmware Revision: 1.0 00:08:03.894 00:08:03.894 00:08:03.894 Commands Supported and Effects 00:08:03.894 ============================== 00:08:03.894 Admin Commands 00:08:03.894 -------------- 00:08:03.894 Delete I/O Submission Queue (00h): Supported 00:08:03.894 Create I/O Submission Queue (01h): Supported 00:08:03.894 Get Log Page (02h): Supported 00:08:03.894 Delete I/O Completion Queue (04h): Supported 00:08:03.894 Create I/O Completion Queue (05h): Supported 00:08:03.894 Identify (06h): Supported 00:08:03.894 Abort (08h): Supported 00:08:03.894 Set Features (09h): Supported 00:08:03.894 Get Features (0Ah): Supported 00:08:03.894 Asynchronous Event Request (0Ch): Supported 00:08:03.894 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:03.894 Directive Send (19h): Supported 00:08:03.894 Directive Receive (1Ah): Supported 00:08:03.894 Virtualization Management (1Ch): Supported 00:08:03.894 Doorbell Buffer Config (7Ch): Supported 00:08:03.894 Format NVM (80h): Supported LBA-Change 00:08:03.894 I/O Commands 00:08:03.894 ------------ 00:08:03.894 Flush (00h): Supported LBA-Change 00:08:03.894 Write (01h): Supported LBA-Change 00:08:03.894 Read (02h): Supported 00:08:03.894 Compare (05h): Supported 00:08:03.894 Write Zeroes (08h): Supported LBA-Change 00:08:03.894 Dataset Management (09h): Supported LBA-Change 00:08:03.894 Unknown (0Ch): Supported 00:08:03.894 Unknown (12h): Supported 00:08:03.894 Copy (19h): Supported LBA-Change 00:08:03.894 Unknown (1Dh): Supported LBA-Change 00:08:03.894 00:08:03.894 Error Log 00:08:03.894 ========= 00:08:03.894 00:08:03.894 Arbitration 00:08:03.894 =========== 00:08:03.894 Arbitration Burst: no limit 00:08:03.894 00:08:03.894 Power Management 00:08:03.894 ================ 00:08:03.894 Number of Power States: 1 00:08:03.894 Current Power State: Power State #0 00:08:03.894 Power State #0: 00:08:03.894 Max Power: 25.00 W 00:08:03.894 Non-Operational State: Operational 00:08:03.894 Entry Latency: 16 microseconds 00:08:03.894 Exit Latency: 4 microseconds 00:08:03.894 Relative Read Throughput: 0 00:08:03.894 Relative Read Latency: 0 00:08:03.894 Relative Write Throughput: 0 00:08:03.894 Relative Write Latency: 0 00:08:03.894 Idle Power: Not Reported 00:08:03.894 Active Power: Not Reported 00:08:03.894 Non-Operational Permissive Mode: Not Supported 00:08:03.894 00:08:03.894 Health Information 00:08:03.894 ================== 00:08:03.894 Critical Warnings: 00:08:03.894 Available Spare Space: OK 00:08:03.894 Temperature: OK 00:08:03.894 Device 
Reliability: OK 00:08:03.894 Read Only: No 00:08:03.894 Volatile Memory Backup: OK 00:08:03.894 Current Temperature: 323 Kelvin (50 Celsius) 00:08:03.894 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:03.894 Available Spare: 0% 00:08:03.894 Available Spare Threshold: 0% 00:08:03.894 Life Percentage Used: 0% 00:08:03.894 Data Units Read: 2117 00:08:03.894 Data Units Written: 1905 00:08:03.894 Host Read Commands: 118543 00:08:03.894 Host Write Commands: 116812 00:08:03.894 Controller Busy Time: 0 minutes 00:08:03.894 Power Cycles: 0 00:08:03.894 Power On Hours: 0 hours 00:08:03.894 Unsafe Shutdowns: 0 00:08:03.894 Unrecoverable Media Errors: 0 00:08:03.894 Lifetime Error Log Entries: 0 00:08:03.894 Warning Temperature Time: 0 minutes 00:08:03.894 Critical Temperature Time: 0 minutes 00:08:03.894 00:08:03.894 Number of Queues 00:08:03.894 ================ 00:08:03.894 Number of I/O Submission Queues: 64 00:08:03.894 Number of I/O Completion Queues: 64 00:08:03.894 00:08:03.894 ZNS Specific Controller Data 00:08:03.894 ============================ 00:08:03.894 Zone Append Size Limit: 0 00:08:03.894 00:08:03.894 00:08:03.894 Active Namespaces 00:08:03.894 ================= 00:08:03.894 Namespace ID:1 00:08:03.894 Error Recovery Timeout: Unlimited 00:08:03.894 Command Set Identifier: NVM (00h) 00:08:03.894 Deallocate: Supported 00:08:03.894 Deallocated/Unwritten Error: Supported 00:08:03.894 Deallocated Read Value: All 0x00 00:08:03.894 Deallocate in Write Zeroes: Not Supported 00:08:03.894 Deallocated Guard Field: 0xFFFF 00:08:03.894 Flush: Supported 00:08:03.894 Reservation: Not Supported 00:08:03.894 Namespace Sharing Capabilities: Private 00:08:03.894 Size (in LBAs): 1048576 (4GiB) 00:08:03.894 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.894 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.894 Thin Provisioning: Not Supported 00:08:03.894 Per-NS Atomic Units: No 00:08:03.894 Maximum Single Source Range Length: 128 00:08:03.894 Maximum Copy Length: 128 00:08:03.894 Maximum Source Range Count: 128 00:08:03.894 NGUID/EUI64 Never Reused: No 00:08:03.894 Namespace Write Protected: No 00:08:03.894 Number of LBA Formats: 8 00:08:03.894 Current LBA Format: LBA Format #04 00:08:03.894 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.894 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.894 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.894 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.894 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.894 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.894 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.894 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.894 00:08:03.894 NVM Specific Namespace Data 00:08:03.894 =========================== 00:08:03.894 Logical Block Storage Tag Mask: 0 00:08:03.894 Protection Information Capabilities: 00:08:03.894 16b Guard Protection Information Storage Tag Support: No 00:08:03.894 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.894 Storage Tag Check Read Support: No 00:08:03.894 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.894 Namespace ID:2 00:08:03.894 Error Recovery Timeout: Unlimited 00:08:03.894 Command Set Identifier: NVM (00h) 00:08:03.894 Deallocate: Supported 00:08:03.894 Deallocated/Unwritten Error: Supported 00:08:03.894 Deallocated Read Value: All 0x00 00:08:03.894 Deallocate in Write Zeroes: Not Supported 00:08:03.894 Deallocated Guard Field: 0xFFFF 00:08:03.894 Flush: Supported 00:08:03.894 Reservation: Not Supported 00:08:03.894 Namespace Sharing Capabilities: Private 00:08:03.895 Size (in LBAs): 1048576 (4GiB) 00:08:03.895 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.895 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.895 Thin Provisioning: Not Supported 00:08:03.895 Per-NS Atomic Units: No 00:08:03.895 Maximum Single Source Range Length: 128 00:08:03.895 Maximum Copy Length: 128 00:08:03.895 Maximum Source Range Count: 128 00:08:03.895 NGUID/EUI64 Never Reused: No 00:08:03.895 Namespace Write Protected: No 00:08:03.895 Number of LBA Formats: 8 00:08:03.895 Current LBA Format: LBA Format #04 00:08:03.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.895 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.895 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.895 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.895 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.895 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.895 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.895 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.895 00:08:03.895 NVM Specific Namespace Data 00:08:03.895 =========================== 00:08:03.895 Logical Block Storage Tag Mask: 0 00:08:03.895 Protection Information Capabilities: 00:08:03.895 16b Guard Protection Information Storage Tag Support: No 00:08:03.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.895 Storage Tag Check Read Support: No 00:08:03.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Namespace ID:3 00:08:03.895 Error Recovery Timeout: Unlimited 00:08:03.895 Command Set Identifier: NVM (00h) 00:08:03.895 Deallocate: Supported 00:08:03.895 Deallocated/Unwritten Error: Supported 00:08:03.895 Deallocated Read Value: All 0x00 00:08:03.895 Deallocate in Write Zeroes: Not Supported 00:08:03.895 Deallocated Guard Field: 0xFFFF 00:08:03.895 Flush: Supported 00:08:03.895 Reservation: Not Supported 00:08:03.895 
Namespace Sharing Capabilities: Private 00:08:03.895 Size (in LBAs): 1048576 (4GiB) 00:08:03.895 Capacity (in LBAs): 1048576 (4GiB) 00:08:03.895 Utilization (in LBAs): 1048576 (4GiB) 00:08:03.895 Thin Provisioning: Not Supported 00:08:03.895 Per-NS Atomic Units: No 00:08:03.895 Maximum Single Source Range Length: 128 00:08:03.895 Maximum Copy Length: 128 00:08:03.895 Maximum Source Range Count: 128 00:08:03.895 NGUID/EUI64 Never Reused: No 00:08:03.895 Namespace Write Protected: No 00:08:03.895 Number of LBA Formats: 8 00:08:03.895 Current LBA Format: LBA Format #04 00:08:03.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:03.895 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:03.895 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:03.895 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:03.895 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:03.895 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:03.895 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:03.895 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:03.895 00:08:03.895 NVM Specific Namespace Data 00:08:03.895 =========================== 00:08:03.895 Logical Block Storage Tag Mask: 0 00:08:03.895 Protection Information Capabilities: 00:08:03.895 16b Guard Protection Information Storage Tag Support: No 00:08:03.895 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:03.895 Storage Tag Check Read Support: No 00:08:03.895 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:03.895 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:03.895 10:08:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:04.153 ===================================================== 00:08:04.153 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:04.153 ===================================================== 00:08:04.153 Controller Capabilities/Features 00:08:04.153 ================================ 00:08:04.153 Vendor ID: 1b36 00:08:04.153 Subsystem Vendor ID: 1af4 00:08:04.153 Serial Number: 12343 00:08:04.153 Model Number: QEMU NVMe Ctrl 00:08:04.153 Firmware Version: 8.0.0 00:08:04.153 Recommended Arb Burst: 6 00:08:04.153 IEEE OUI Identifier: 00 54 52 00:08:04.153 Multi-path I/O 00:08:04.153 May have multiple subsystem ports: No 00:08:04.153 May have multiple controllers: Yes 00:08:04.154 Associated with SR-IOV VF: No 00:08:04.154 Max Data Transfer Size: 524288 00:08:04.154 Max Number of Namespaces: 256 00:08:04.154 Max Number of I/O Queues: 64 00:08:04.154 NVMe Specification Version (VS): 1.4 00:08:04.154 NVMe Specification Version (Identify): 1.4 00:08:04.154 Maximum Queue Entries: 2048 
00:08:04.154 Contiguous Queues Required: Yes 00:08:04.154 Arbitration Mechanisms Supported 00:08:04.154 Weighted Round Robin: Not Supported 00:08:04.154 Vendor Specific: Not Supported 00:08:04.154 Reset Timeout: 7500 ms 00:08:04.154 Doorbell Stride: 4 bytes 00:08:04.154 NVM Subsystem Reset: Not Supported 00:08:04.154 Command Sets Supported 00:08:04.154 NVM Command Set: Supported 00:08:04.154 Boot Partition: Not Supported 00:08:04.154 Memory Page Size Minimum: 4096 bytes 00:08:04.154 Memory Page Size Maximum: 65536 bytes 00:08:04.154 Persistent Memory Region: Not Supported 00:08:04.154 Optional Asynchronous Events Supported 00:08:04.154 Namespace Attribute Notices: Supported 00:08:04.154 Firmware Activation Notices: Not Supported 00:08:04.154 ANA Change Notices: Not Supported 00:08:04.154 PLE Aggregate Log Change Notices: Not Supported 00:08:04.154 LBA Status Info Alert Notices: Not Supported 00:08:04.154 EGE Aggregate Log Change Notices: Not Supported 00:08:04.154 Normal NVM Subsystem Shutdown event: Not Supported 00:08:04.154 Zone Descriptor Change Notices: Not Supported 00:08:04.154 Discovery Log Change Notices: Not Supported 00:08:04.154 Controller Attributes 00:08:04.154 128-bit Host Identifier: Not Supported 00:08:04.154 Non-Operational Permissive Mode: Not Supported 00:08:04.154 NVM Sets: Not Supported 00:08:04.154 Read Recovery Levels: Not Supported 00:08:04.154 Endurance Groups: Supported 00:08:04.154 Predictable Latency Mode: Not Supported 00:08:04.154 Traffic Based Keep Alive: Not Supported 00:08:04.154 Namespace Granularity: Not Supported 00:08:04.154 SQ Associations: Not Supported 00:08:04.154 UUID List: Not Supported 00:08:04.154 Multi-Domain Subsystem: Not Supported 00:08:04.154 Fixed Capacity Management: Not Supported 00:08:04.154 Variable Capacity Management: Not Supported 00:08:04.154 Delete Endurance Group: Not Supported 00:08:04.154 Delete NVM Set: Not Supported 00:08:04.154 Extended LBA Formats Supported: Supported 00:08:04.154 Flexible Data Placement Supported: Supported 00:08:04.154 00:08:04.154 Controller Memory Buffer Support 00:08:04.154 ================================ 00:08:04.154 Supported: No 00:08:04.154 00:08:04.154 Persistent Memory Region Support 00:08:04.154 ================================ 00:08:04.154 Supported: No 00:08:04.154 00:08:04.154 Admin Command Set Attributes 00:08:04.154 ============================ 00:08:04.154 Security Send/Receive: Not Supported 00:08:04.154 Format NVM: Supported 00:08:04.154 Firmware Activate/Download: Not Supported 00:08:04.154 Namespace Management: Supported 00:08:04.154 Device Self-Test: Not Supported 00:08:04.154 Directives: Supported 00:08:04.154 NVMe-MI: Not Supported 00:08:04.154 Virtualization Management: Not Supported 00:08:04.154 Doorbell Buffer Config: Supported 00:08:04.154 Get LBA Status Capability: Not Supported 00:08:04.154 Command & Feature Lockdown Capability: Not Supported 00:08:04.154 Abort Command Limit: 4 00:08:04.154 Async Event Request Limit: 4 00:08:04.154 Number of Firmware Slots: N/A 00:08:04.154 Firmware Slot 1 Read-Only: N/A 00:08:04.154 Firmware Activation Without Reset: N/A 00:08:04.154 Multiple Update Detection Support: N/A 00:08:04.154 Firmware Update Granularity: No Information Provided 00:08:04.154 Per-Namespace SMART Log: Yes 00:08:04.154 Asymmetric Namespace Access Log Page: Not Supported 00:08:04.154 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:04.154 Command Effects Log Page: Supported 00:08:04.154 Get Log Page Extended Data: Supported 00:08:04.154 Telemetry Log Pages: Not
Supported 00:08:04.154 Persistent Event Log Pages: Not Supported 00:08:04.154 Supported Log Pages Log Page: May Support 00:08:04.154 Commands Supported & Effects Log Page: Not Supported 00:08:04.154 Feature Identifiers & Effects Log Page: May Support 00:08:04.154 NVMe-MI Commands & Effects Log Page: May Support 00:08:04.154 Data Area 4 for Telemetry Log: Not Supported 00:08:04.154 Error Log Page Entries Supported: 1 00:08:04.154 Keep Alive: Not Supported 00:08:04.154 00:08:04.154 NVM Command Set Attributes 00:08:04.154 ========================== 00:08:04.154 Submission Queue Entry Size 00:08:04.154 Max: 64 00:08:04.154 Min: 64 00:08:04.154 Completion Queue Entry Size 00:08:04.154 Max: 16 00:08:04.154 Min: 16 00:08:04.154 Number of Namespaces: 256 00:08:04.154 Compare Command: Supported 00:08:04.154 Write Uncorrectable Command: Not Supported 00:08:04.154 Dataset Management Command: Supported 00:08:04.154 Write Zeroes Command: Supported 00:08:04.154 Set Features Save Field: Supported 00:08:04.154 Reservations: Not Supported 00:08:04.154 Timestamp: Supported 00:08:04.154 Copy: Supported 00:08:04.154 Volatile Write Cache: Present 00:08:04.154 Atomic Write Unit (Normal): 1 00:08:04.154 Atomic Write Unit (PFail): 1 00:08:04.154 Atomic Compare & Write Unit: 1 00:08:04.154 Fused Compare & Write: Not Supported 00:08:04.154 Scatter-Gather List 00:08:04.154 SGL Command Set: Supported 00:08:04.154 SGL Keyed: Not Supported 00:08:04.154 SGL Bit Bucket Descriptor: Not Supported 00:08:04.154 SGL Metadata Pointer: Not Supported 00:08:04.154 Oversized SGL: Not Supported 00:08:04.154 SGL Metadata Address: Not Supported 00:08:04.154 SGL Offset: Not Supported 00:08:04.154 Transport SGL Data Block: Not Supported 00:08:04.154 Replay Protected Memory Block: Not Supported 00:08:04.154 00:08:04.154 Firmware Slot Information 00:08:04.154 ========================= 00:08:04.154 Active slot: 1 00:08:04.154 Slot 1 Firmware Revision: 1.0 00:08:04.154 00:08:04.154 00:08:04.154 Commands Supported and Effects 00:08:04.154 ============================== 00:08:04.154 Admin Commands 00:08:04.154 -------------- 00:08:04.154 Delete I/O Submission Queue (00h): Supported 00:08:04.154 Create I/O Submission Queue (01h): Supported 00:08:04.154 Get Log Page (02h): Supported 00:08:04.154 Delete I/O Completion Queue (04h): Supported 00:08:04.154 Create I/O Completion Queue (05h): Supported 00:08:04.154 Identify (06h): Supported 00:08:04.154 Abort (08h): Supported 00:08:04.154 Set Features (09h): Supported 00:08:04.154 Get Features (0Ah): Supported 00:08:04.154 Asynchronous Event Request (0Ch): Supported 00:08:04.154 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:04.154 Directive Send (19h): Supported 00:08:04.154 Directive Receive (1Ah): Supported 00:08:04.154 Virtualization Management (1Ch): Supported 00:08:04.154 Doorbell Buffer Config (7Ch): Supported 00:08:04.154 Format NVM (80h): Supported LBA-Change 00:08:04.154 I/O Commands 00:08:04.154 ------------ 00:08:04.154 Flush (00h): Supported LBA-Change 00:08:04.154 Write (01h): Supported LBA-Change 00:08:04.154 Read (02h): Supported 00:08:04.154 Compare (05h): Supported 00:08:04.154 Write Zeroes (08h): Supported LBA-Change 00:08:04.154 Dataset Management (09h): Supported LBA-Change 00:08:04.154 Unknown (0Ch): Supported 00:08:04.154 Unknown (12h): Supported 00:08:04.154 Copy (19h): Supported LBA-Change 00:08:04.154 Unknown (1Dh): Supported LBA-Change 00:08:04.154 00:08:04.154 Error Log 00:08:04.154 ========= 00:08:04.154 00:08:04.154 Arbitration 00:08:04.154 ===========
00:08:04.154 Arbitration Burst: no limit 00:08:04.154 00:08:04.154 Power Management 00:08:04.154 ================ 00:08:04.154 Number of Power States: 1 00:08:04.154 Current Power State: Power State #0 00:08:04.154 Power State #0: 00:08:04.154 Max Power: 25.00 W 00:08:04.154 Non-Operational State: Operational 00:08:04.154 Entry Latency: 16 microseconds 00:08:04.154 Exit Latency: 4 microseconds 00:08:04.154 Relative Read Throughput: 0 00:08:04.154 Relative Read Latency: 0 00:08:04.154 Relative Write Throughput: 0 00:08:04.154 Relative Write Latency: 0 00:08:04.154 Idle Power: Not Reported 00:08:04.154 Active Power: Not Reported 00:08:04.154 Non-Operational Permissive Mode: Not Supported 00:08:04.154 00:08:04.154 Health Information 00:08:04.154 ================== 00:08:04.154 Critical Warnings: 00:08:04.154 Available Spare Space: OK 00:08:04.154 Temperature: OK 00:08:04.154 Device Reliability: OK 00:08:04.154 Read Only: No 00:08:04.155 Volatile Memory Backup: OK 00:08:04.155 Current Temperature: 323 Kelvin (50 Celsius) 00:08:04.155 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:04.155 Available Spare: 0% 00:08:04.155 Available Spare Threshold: 0% 00:08:04.155 Life Percentage Used: 0% 00:08:04.155 Data Units Read: 804 00:08:04.155 Data Units Written: 733 00:08:04.155 Host Read Commands: 40379 00:08:04.155 Host Write Commands: 39802 00:08:04.155 Controller Busy Time: 0 minutes 00:08:04.155 Power Cycles: 0 00:08:04.155 Power On Hours: 0 hours 00:08:04.155 Unsafe Shutdowns: 0 00:08:04.155 Unrecoverable Media Errors: 0 00:08:04.155 Lifetime Error Log Entries: 0 00:08:04.155 Warning Temperature Time: 0 minutes 00:08:04.155 Critical Temperature Time: 0 minutes 00:08:04.155 00:08:04.155 Number of Queues 00:08:04.155 ================ 00:08:04.155 Number of I/O Submission Queues: 64 00:08:04.155 Number of I/O Completion Queues: 64 00:08:04.155 00:08:04.155 ZNS Specific Controller Data 00:08:04.155 ============================ 00:08:04.155 Zone Append Size Limit: 0 00:08:04.155 00:08:04.155 00:08:04.155 Active Namespaces 00:08:04.155 ================= 00:08:04.155 Namespace ID:1 00:08:04.155 Error Recovery Timeout: Unlimited 00:08:04.155 Command Set Identifier: NVM (00h) 00:08:04.155 Deallocate: Supported 00:08:04.155 Deallocated/Unwritten Error: Supported 00:08:04.155 Deallocated Read Value: All 0x00 00:08:04.155 Deallocate in Write Zeroes: Not Supported 00:08:04.155 Deallocated Guard Field: 0xFFFF 00:08:04.155 Flush: Supported 00:08:04.155 Reservation: Not Supported 00:08:04.155 Namespace Sharing Capabilities: Multiple Controllers 00:08:04.155 Size (in LBAs): 262144 (1GiB) 00:08:04.155 Capacity (in LBAs): 262144 (1GiB) 00:08:04.155 Utilization (in LBAs): 262144 (1GiB) 00:08:04.155 Thin Provisioning: Not Supported 00:08:04.155 Per-NS Atomic Units: No 00:08:04.155 Maximum Single Source Range Length: 128 00:08:04.155 Maximum Copy Length: 128 00:08:04.155 Maximum Source Range Count: 128 00:08:04.155 NGUID/EUI64 Never Reused: No 00:08:04.155 Namespace Write Protected: No 00:08:04.155 Endurance group ID: 1 00:08:04.155 Number of LBA Formats: 8 00:08:04.155 Current LBA Format: LBA Format #04 00:08:04.155 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:04.155 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:04.155 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:04.155 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:04.155 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:04.155 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:04.155 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:08:04.155 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:04.155 00:08:04.155 Get Feature FDP: 00:08:04.155 ================ 00:08:04.155 Enabled: Yes 00:08:04.155 FDP configuration index: 0 00:08:04.155 00:08:04.155 FDP configurations log page 00:08:04.155 =========================== 00:08:04.155 Number of FDP configurations: 1 00:08:04.155 Version: 0 00:08:04.155 Size: 112 00:08:04.155 FDP Configuration Descriptor: 0 00:08:04.155 Descriptor Size: 96 00:08:04.155 Reclaim Group Identifier format: 2 00:08:04.155 FDP Volatile Write Cache: Not Present 00:08:04.155 FDP Configuration: Valid 00:08:04.155 Vendor Specific Size: 0 00:08:04.155 Number of Reclaim Groups: 2 00:08:04.155 Number of Reclaim Unit Handles: 8 00:08:04.155 Max Placement Identifiers: 128 00:08:04.155 Number of Namespaces Supported: 256 00:08:04.155 Reclaim Unit Nominal Size: 6000000 bytes 00:08:04.155 Estimated Reclaim Unit Time Limit: Not Reported 00:08:04.155 RUH Desc #000: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #001: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #002: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #003: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #004: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #005: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #006: RUH Type: Initially Isolated 00:08:04.155 RUH Desc #007: RUH Type: Initially Isolated 00:08:04.155 00:08:04.155 FDP reclaim unit handle usage log page 00:08:04.155 ====================================== 00:08:04.155 Number of Reclaim Unit Handles: 8 00:08:04.155 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:04.155 RUH Usage Desc #001: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #002: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #003: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #004: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #005: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #006: RUH Attributes: Unused 00:08:04.155 RUH Usage Desc #007: RUH Attributes: Unused 00:08:04.155 00:08:04.155 FDP statistics log page 00:08:04.155 ======================= 00:08:04.155 Host bytes with metadata written: 466853888 00:08:04.155 Media bytes with metadata written: 466907136 00:08:04.155 Media bytes erased: 0 00:08:04.155 00:08:04.155 FDP events log page 00:08:04.155 =================== 00:08:04.155 Number of FDP events: 0 00:08:04.155 00:08:04.155 NVM Specific Namespace Data 00:08:04.155 =========================== 00:08:04.155 Logical Block Storage Tag Mask: 0 00:08:04.155 Protection Information Capabilities: 00:08:04.155 16b Guard Protection Information Storage Tag Support: No 00:08:04.155 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:04.155 Storage Tag Check Read Support: No 00:08:04.155 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 Extended LBA
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:04.155 00:08:04.155 real 0m1.321s 00:08:04.155 user 0m0.563s 00:08:04.155 sys 0m0.552s 00:08:04.155 10:08:09 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.155 10:08:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:04.155 ************************************ 00:08:04.155 END TEST nvme_identify 00:08:04.155 ************************************ 00:08:04.413 10:08:09 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:04.413 10:08:09 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:04.413 10:08:09 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:04.413 10:08:09 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.413 ************************************ 00:08:04.413 START TEST nvme_perf 00:08:04.413 ************************************ 00:08:04.413 10:08:09 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:08:04.413 10:08:09 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:05.789 Initializing NVMe Controllers 00:08:05.789 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.789 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.789 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.790 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.790 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:05.790 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:05.790 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:05.790 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:05.790 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:05.790 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:05.790 Initialization complete. Launching workers. 
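For reference, the spdk_nvme_perf invocation recorded just above can be replayed by hand against the same controllers. A minimal sketch follows, assuming the build path used by this job; the flag annotations are a best-effort gloss of spdk_nvme_perf's help text rather than authoritative documentation, and the trailing -N flag is intentionally left unannotated:

  # -q 128   queue depth: keep 128 I/Os outstanding per namespace
  # -w read  sequential-read workload
  # -o 12288 I/O size in bytes (three 4096-byte blocks per I/O)
  # -t 1     run time in seconds
  # -LL      -L enables latency tracking; a second L adds the detailed histograms
  # -i 0     shared-memory group ID for this SPDK instance
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

As a sanity check on the summary table that follows: 7126.97 IOPS * 12288 bytes per I/O = 87,576,207 B/s, or ~83.52 MiB/s, matching the MiB/s column reported for 0000:00:10.0. In the per-device latency histograms further below, each row gives a latency range in microseconds, the cumulative percentage of I/Os completed at or below that range, and, in parentheses, the count of I/Os falling within that range.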
00:08:05.790 ======================================================== 00:08:05.790 Latency(us) 00:08:05.790 Device Information : IOPS MiB/s Average min max 00:08:05.790 PCIE (0000:00:10.0) NSID 1 from core 0: 7126.97 83.52 18006.79 11542.67 49394.43 00:08:05.790 PCIE (0000:00:11.0) NSID 1 from core 0: 7126.97 83.52 17967.60 11277.14 46647.17 00:08:05.790 PCIE (0000:00:13.0) NSID 1 from core 0: 7126.97 83.52 17925.97 11662.93 44589.17 00:08:05.790 PCIE (0000:00:12.0) NSID 1 from core 0: 7126.97 83.52 17884.32 11732.98 42448.79 00:08:05.790 PCIE (0000:00:12.0) NSID 2 from core 0: 7126.97 83.52 17842.30 11697.59 39928.67 00:08:05.790 PCIE (0000:00:12.0) NSID 3 from core 0: 7190.60 84.26 17643.01 11529.94 30010.37 00:08:05.790 ======================================================== 00:08:05.790 Total : 42825.45 501.86 17877.98 11277.14 49394.43 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12149.366us 00:08:05.790 10.00000% : 13510.498us 00:08:05.790 25.00000% : 14821.218us 00:08:05.790 50.00000% : 17845.957us 00:08:05.790 75.00000% : 20064.098us 00:08:05.790 90.00000% : 21878.942us 00:08:05.790 95.00000% : 22786.363us 00:08:05.790 98.00000% : 23996.258us 00:08:05.790 99.00000% : 40934.794us 00:08:05.790 99.50000% : 47790.868us 00:08:05.790 99.90000% : 49202.412us 00:08:05.790 99.99000% : 49404.062us 00:08:05.790 99.99900% : 49404.062us 00:08:05.790 99.99990% : 49404.062us 00:08:05.790 99.99999% : 49404.062us 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12098.954us 00:08:05.790 10.00000% : 13611.323us 00:08:05.790 25.00000% : 14821.218us 00:08:05.790 50.00000% : 17845.957us 00:08:05.790 75.00000% : 20064.098us 00:08:05.790 90.00000% : 21878.942us 00:08:05.790 95.00000% : 22685.538us 00:08:05.790 98.00000% : 24197.908us 00:08:05.790 99.00000% : 38313.354us 00:08:05.790 99.50000% : 45169.428us 00:08:05.790 99.90000% : 46379.323us 00:08:05.790 99.99000% : 46782.622us 00:08:05.790 99.99900% : 46782.622us 00:08:05.790 99.99990% : 46782.622us 00:08:05.790 99.99999% : 46782.622us 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12098.954us 00:08:05.790 10.00000% : 13611.323us 00:08:05.790 25.00000% : 14922.043us 00:08:05.790 50.00000% : 17946.782us 00:08:05.790 75.00000% : 19963.274us 00:08:05.790 90.00000% : 21878.942us 00:08:05.790 95.00000% : 22685.538us 00:08:05.790 98.00000% : 24197.908us 00:08:05.790 99.00000% : 36095.212us 00:08:05.790 99.50000% : 43152.935us 00:08:05.790 99.90000% : 44362.831us 00:08:05.790 99.99000% : 44766.129us 00:08:05.790 99.99900% : 44766.129us 00:08:05.790 99.99990% : 44766.129us 00:08:05.790 99.99999% : 44766.129us 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12149.366us 00:08:05.790 10.00000% : 13409.674us 00:08:05.790 25.00000% : 15022.868us 00:08:05.790 50.00000% : 17845.957us 00:08:05.790 75.00000% : 19963.274us 00:08:05.790 90.00000% : 21878.942us 00:08:05.790 95.00000% : 22786.363us 00:08:05.790 98.00000% : 23895.434us 
00:08:05.790 99.00000% : 33473.772us 00:08:05.790 99.50000% : 41136.443us 00:08:05.790 99.90000% : 42346.338us 00:08:05.790 99.99000% : 42547.988us 00:08:05.790 99.99900% : 42547.988us 00:08:05.790 99.99990% : 42547.988us 00:08:05.790 99.99999% : 42547.988us 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12199.778us 00:08:05.790 10.00000% : 13510.498us 00:08:05.790 25.00000% : 14922.043us 00:08:05.790 50.00000% : 17845.957us 00:08:05.790 75.00000% : 19963.274us 00:08:05.790 90.00000% : 21979.766us 00:08:05.790 95.00000% : 22988.012us 00:08:05.790 98.00000% : 24197.908us 00:08:05.790 99.00000% : 30650.683us 00:08:05.790 99.50000% : 38515.003us 00:08:05.790 99.90000% : 39724.898us 00:08:05.790 99.99000% : 40128.197us 00:08:05.790 99.99900% : 40128.197us 00:08:05.790 99.99990% : 40128.197us 00:08:05.790 99.99999% : 40128.197us 00:08:05.790 00:08:05.790 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.790 ================================================================================= 00:08:05.790 1.00000% : 12149.366us 00:08:05.790 10.00000% : 13611.323us 00:08:05.790 25.00000% : 14821.218us 00:08:05.790 50.00000% : 17845.957us 00:08:05.790 75.00000% : 20064.098us 00:08:05.790 90.00000% : 21778.117us 00:08:05.790 95.00000% : 22584.714us 00:08:05.790 98.00000% : 23290.486us 00:08:05.790 99.00000% : 24500.382us 00:08:05.790 99.50000% : 28634.191us 00:08:05.790 99.90000% : 29844.086us 00:08:05.790 99.99000% : 30045.735us 00:08:05.790 99.99900% : 30045.735us 00:08:05.790 99.99990% : 30045.735us 00:08:05.790 99.99999% : 30045.735us 00:08:05.790 00:08:05.790 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:05.790 ============================================================================== 00:08:05.790 Range in us Cumulative IO count 00:08:05.790 11494.006 - 11544.418: 0.0419% ( 3) 00:08:05.790 11544.418 - 11594.831: 0.0977% ( 4) 00:08:05.790 11594.831 - 11645.243: 0.1535% ( 4) 00:08:05.790 11645.243 - 11695.655: 0.2372% ( 6) 00:08:05.790 11695.655 - 11746.068: 0.2930% ( 4) 00:08:05.790 11746.068 - 11796.480: 0.3906% ( 7) 00:08:05.790 11796.480 - 11846.892: 0.4464% ( 4) 00:08:05.790 11846.892 - 11897.305: 0.5301% ( 6) 00:08:05.790 11897.305 - 11947.717: 0.6138% ( 6) 00:08:05.790 11947.717 - 11998.129: 0.6836% ( 5) 00:08:05.790 11998.129 - 12048.542: 0.8371% ( 11) 00:08:05.790 12048.542 - 12098.954: 0.9905% ( 11) 00:08:05.790 12098.954 - 12149.366: 1.1858% ( 14) 00:08:05.790 12149.366 - 12199.778: 1.2974% ( 8) 00:08:05.790 12199.778 - 12250.191: 1.5206% ( 16) 00:08:05.790 12250.191 - 12300.603: 1.5904% ( 5) 00:08:05.790 12300.603 - 12351.015: 1.8136% ( 16) 00:08:05.790 12351.015 - 12401.428: 2.0089% ( 14) 00:08:05.790 12401.428 - 12451.840: 2.2740% ( 19) 00:08:05.790 12451.840 - 12502.252: 2.5112% ( 17) 00:08:05.790 12502.252 - 12552.665: 2.7344% ( 16) 00:08:05.790 12552.665 - 12603.077: 2.9157% ( 13) 00:08:05.790 12603.077 - 12653.489: 3.1250% ( 15) 00:08:05.790 12653.489 - 12703.902: 3.4459% ( 23) 00:08:05.790 12703.902 - 12754.314: 3.7388% ( 21) 00:08:05.790 12754.314 - 12804.726: 3.9760% ( 17) 00:08:05.790 12804.726 - 12855.138: 4.2411% ( 19) 00:08:05.790 12855.138 - 12905.551: 4.5619% ( 23) 00:08:05.790 12905.551 - 13006.375: 5.4688% ( 65) 00:08:05.790 13006.375 - 13107.200: 6.1942% ( 52) 00:08:05.790 13107.200 - 13208.025: 7.0033% ( 58) 00:08:05.790 13208.025 - 13308.849: 7.8962% ( 64) 
00:08:05.790 13308.849 - 13409.674: 8.8449% ( 68) 00:08:05.790 13409.674 - 13510.498: 10.0167% ( 84) 00:08:05.791 13510.498 - 13611.323: 11.1049% ( 78) 00:08:05.791 13611.323 - 13712.148: 12.0536% ( 68) 00:08:05.791 13712.148 - 13812.972: 13.5603% ( 108) 00:08:05.791 13812.972 - 13913.797: 14.7042% ( 82) 00:08:05.791 13913.797 - 14014.622: 16.0017% ( 93) 00:08:05.791 14014.622 - 14115.446: 17.1875% ( 85) 00:08:05.791 14115.446 - 14216.271: 18.4849% ( 93) 00:08:05.791 14216.271 - 14317.095: 19.6568% ( 84) 00:08:05.791 14317.095 - 14417.920: 20.4102% ( 54) 00:08:05.791 14417.920 - 14518.745: 21.6099% ( 86) 00:08:05.791 14518.745 - 14619.569: 22.7121% ( 79) 00:08:05.791 14619.569 - 14720.394: 24.0792% ( 98) 00:08:05.791 14720.394 - 14821.218: 25.2232% ( 82) 00:08:05.791 14821.218 - 14922.043: 26.3253% ( 79) 00:08:05.791 14922.043 - 15022.868: 27.4275% ( 79) 00:08:05.791 15022.868 - 15123.692: 28.2645% ( 60) 00:08:05.791 15123.692 - 15224.517: 29.3108% ( 75) 00:08:05.791 15224.517 - 15325.342: 30.1339% ( 59) 00:08:05.791 15325.342 - 15426.166: 30.9989% ( 62) 00:08:05.791 15426.166 - 15526.991: 31.6964% ( 50) 00:08:05.791 15526.991 - 15627.815: 32.3382% ( 46) 00:08:05.791 15627.815 - 15728.640: 33.0357% ( 50) 00:08:05.791 15728.640 - 15829.465: 33.6356% ( 43) 00:08:05.791 15829.465 - 15930.289: 33.9844% ( 25) 00:08:05.791 15930.289 - 16031.114: 34.5285% ( 39) 00:08:05.791 16031.114 - 16131.938: 35.0586% ( 38) 00:08:05.791 16131.938 - 16232.763: 35.6864% ( 45) 00:08:05.791 16232.763 - 16333.588: 36.3281% ( 46) 00:08:05.791 16333.588 - 16434.412: 36.9559% ( 45) 00:08:05.791 16434.412 - 16535.237: 37.6814% ( 52) 00:08:05.791 16535.237 - 16636.062: 38.4068% ( 52) 00:08:05.791 16636.062 - 16736.886: 39.2578% ( 61) 00:08:05.791 16736.886 - 16837.711: 40.1367% ( 63) 00:08:05.791 16837.711 - 16938.535: 41.1133% ( 70) 00:08:05.791 16938.535 - 17039.360: 41.9782% ( 62) 00:08:05.791 17039.360 - 17140.185: 42.9548% ( 70) 00:08:05.791 17140.185 - 17241.009: 43.8337% ( 63) 00:08:05.791 17241.009 - 17341.834: 44.8521% ( 73) 00:08:05.791 17341.834 - 17442.658: 45.9403% ( 78) 00:08:05.791 17442.658 - 17543.483: 46.8750% ( 67) 00:08:05.791 17543.483 - 17644.308: 48.1166% ( 89) 00:08:05.791 17644.308 - 17745.132: 49.2885% ( 84) 00:08:05.791 17745.132 - 17845.957: 50.4046% ( 80) 00:08:05.791 17845.957 - 17946.782: 51.4927% ( 78) 00:08:05.791 17946.782 - 18047.606: 52.7623% ( 91) 00:08:05.791 18047.606 - 18148.431: 53.8504% ( 78) 00:08:05.791 18148.431 - 18249.255: 54.8968% ( 75) 00:08:05.791 18249.255 - 18350.080: 56.2360% ( 96) 00:08:05.791 18350.080 - 18450.905: 57.4219% ( 85) 00:08:05.791 18450.905 - 18551.729: 58.5519% ( 81) 00:08:05.791 18551.729 - 18652.554: 59.4587% ( 65) 00:08:05.791 18652.554 - 18753.378: 60.7143% ( 90) 00:08:05.791 18753.378 - 18854.203: 62.1652% ( 104) 00:08:05.791 18854.203 - 18955.028: 63.1138% ( 68) 00:08:05.791 18955.028 - 19055.852: 64.0765% ( 69) 00:08:05.791 19055.852 - 19156.677: 65.3878% ( 94) 00:08:05.791 19156.677 - 19257.502: 66.5876% ( 86) 00:08:05.791 19257.502 - 19358.326: 67.8013% ( 87) 00:08:05.791 19358.326 - 19459.151: 68.8895% ( 78) 00:08:05.791 19459.151 - 19559.975: 70.1172% ( 88) 00:08:05.791 19559.975 - 19660.800: 71.3170% ( 86) 00:08:05.791 19660.800 - 19761.625: 72.3354% ( 73) 00:08:05.791 19761.625 - 19862.449: 73.4654% ( 81) 00:08:05.791 19862.449 - 19963.274: 74.5117% ( 75) 00:08:05.791 19963.274 - 20064.098: 75.5301% ( 73) 00:08:05.791 20064.098 - 20164.923: 76.4927% ( 69) 00:08:05.791 20164.923 - 20265.748: 77.3298% ( 60) 00:08:05.791 20265.748 - 
20366.572: 78.2506% ( 66) 00:08:05.791 20366.572 - 20467.397: 79.1992% ( 68) 00:08:05.791 20467.397 - 20568.222: 80.1897% ( 71) 00:08:05.791 20568.222 - 20669.046: 80.9152% ( 52) 00:08:05.791 20669.046 - 20769.871: 81.7801% ( 62) 00:08:05.791 20769.871 - 20870.695: 82.5195% ( 53) 00:08:05.791 20870.695 - 20971.520: 83.3008% ( 56) 00:08:05.791 20971.520 - 21072.345: 84.2355% ( 67) 00:08:05.791 21072.345 - 21173.169: 84.9330% ( 50) 00:08:05.791 21173.169 - 21273.994: 85.7701% ( 60) 00:08:05.791 21273.994 - 21374.818: 86.5374% ( 55) 00:08:05.791 21374.818 - 21475.643: 87.2070% ( 48) 00:08:05.791 21475.643 - 21576.468: 88.0720% ( 62) 00:08:05.791 21576.468 - 21677.292: 88.6300% ( 40) 00:08:05.791 21677.292 - 21778.117: 89.4531% ( 59) 00:08:05.791 21778.117 - 21878.942: 90.1228% ( 48) 00:08:05.791 21878.942 - 21979.766: 90.6529% ( 38) 00:08:05.791 21979.766 - 22080.591: 91.1412% ( 35) 00:08:05.791 22080.591 - 22181.415: 91.7550% ( 44) 00:08:05.791 22181.415 - 22282.240: 92.3968% ( 46) 00:08:05.791 22282.240 - 22383.065: 93.1222% ( 52) 00:08:05.791 22383.065 - 22483.889: 93.5547% ( 31) 00:08:05.791 22483.889 - 22584.714: 94.2941% ( 53) 00:08:05.791 22584.714 - 22685.538: 94.7126% ( 30) 00:08:05.791 22685.538 - 22786.363: 95.0753% ( 26) 00:08:05.791 22786.363 - 22887.188: 95.4939% ( 30) 00:08:05.791 22887.188 - 22988.012: 95.9124% ( 30) 00:08:05.791 22988.012 - 23088.837: 96.3588% ( 32) 00:08:05.791 23088.837 - 23189.662: 96.7913% ( 31) 00:08:05.791 23189.662 - 23290.486: 97.0145% ( 16) 00:08:05.791 23290.486 - 23391.311: 97.2656% ( 18) 00:08:05.791 23391.311 - 23492.135: 97.4888% ( 16) 00:08:05.791 23492.135 - 23592.960: 97.6702% ( 13) 00:08:05.791 23592.960 - 23693.785: 97.7400% ( 5) 00:08:05.791 23693.785 - 23794.609: 97.8795% ( 10) 00:08:05.791 23895.434 - 23996.258: 98.0190% ( 10) 00:08:05.791 24097.083 - 24197.908: 98.0329% ( 1) 00:08:05.791 24197.908 - 24298.732: 98.0608% ( 2) 00:08:05.791 24298.732 - 24399.557: 98.1027% ( 3) 00:08:05.791 24500.382 - 24601.206: 98.1306% ( 2) 00:08:05.791 24601.206 - 24702.031: 98.1724% ( 3) 00:08:05.791 24802.855 - 24903.680: 98.2143% ( 3) 00:08:05.791 38313.354 - 38515.003: 98.2840% ( 5) 00:08:05.791 38515.003 - 38716.652: 98.3398% ( 4) 00:08:05.791 38716.652 - 38918.302: 98.4235% ( 6) 00:08:05.791 38918.302 - 39119.951: 98.4375% ( 1) 00:08:05.791 39119.951 - 39321.600: 98.5631% ( 9) 00:08:05.791 39321.600 - 39523.249: 98.5910% ( 2) 00:08:05.791 39523.249 - 39724.898: 98.6468% ( 4) 00:08:05.791 39724.898 - 39926.548: 98.7165% ( 5) 00:08:05.791 39926.548 - 40128.197: 98.7863% ( 5) 00:08:05.791 40128.197 - 40329.846: 98.8281% ( 3) 00:08:05.791 40329.846 - 40531.495: 98.8839% ( 4) 00:08:05.791 40531.495 - 40733.145: 98.9537% ( 5) 00:08:05.791 40733.145 - 40934.794: 99.0095% ( 4) 00:08:05.791 40934.794 - 41136.443: 99.0792% ( 5) 00:08:05.791 41136.443 - 41338.092: 99.1071% ( 2) 00:08:05.791 46379.323 - 46580.972: 99.1629% ( 4) 00:08:05.791 46580.972 - 46782.622: 99.2048% ( 3) 00:08:05.791 46782.622 - 46984.271: 99.2885% ( 6) 00:08:05.791 46984.271 - 47185.920: 99.3304% ( 3) 00:08:05.791 47185.920 - 47387.569: 99.3862% ( 4) 00:08:05.791 47387.569 - 47589.218: 99.4559% ( 5) 00:08:05.791 47589.218 - 47790.868: 99.5117% ( 4) 00:08:05.791 47790.868 - 47992.517: 99.5815% ( 5) 00:08:05.791 47992.517 - 48194.166: 99.6373% ( 4) 00:08:05.791 48194.166 - 48395.815: 99.6931% ( 4) 00:08:05.791 48395.815 - 48597.465: 99.7489% ( 4) 00:08:05.791 48597.465 - 48799.114: 99.8186% ( 5) 00:08:05.791 48799.114 - 49000.763: 99.8605% ( 3) 00:08:05.791 49000.763 - 49202.412: 99.9302% 
( 5) 00:08:05.791 49202.412 - 49404.062: 100.0000% ( 5) 00:08:05.791 00:08:05.791 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:05.791 ============================================================================== 00:08:05.791 Range in us Cumulative IO count 00:08:05.791 11241.945 - 11292.357: 0.0140% ( 1) 00:08:05.791 11292.357 - 11342.769: 0.0419% ( 2) 00:08:05.791 11342.769 - 11393.182: 0.0698% ( 2) 00:08:05.791 11393.182 - 11443.594: 0.1116% ( 3) 00:08:05.791 11443.594 - 11494.006: 0.1256% ( 1) 00:08:05.791 11494.006 - 11544.418: 0.1814% ( 4) 00:08:05.791 11544.418 - 11594.831: 0.2232% ( 3) 00:08:05.791 11594.831 - 11645.243: 0.2930% ( 5) 00:08:05.791 11645.243 - 11695.655: 0.3348% ( 3) 00:08:05.791 11695.655 - 11746.068: 0.4185% ( 6) 00:08:05.791 11746.068 - 11796.480: 0.5162% ( 7) 00:08:05.791 11796.480 - 11846.892: 0.5999% ( 6) 00:08:05.791 11846.892 - 11897.305: 0.6975% ( 7) 00:08:05.791 11897.305 - 11947.717: 0.8092% ( 8) 00:08:05.791 11947.717 - 11998.129: 0.8789% ( 5) 00:08:05.791 11998.129 - 12048.542: 0.9905% ( 8) 00:08:05.791 12048.542 - 12098.954: 1.1021% ( 8) 00:08:05.791 12098.954 - 12149.366: 1.1998% ( 7) 00:08:05.791 12149.366 - 12199.778: 1.3114% ( 8) 00:08:05.791 12199.778 - 12250.191: 1.4648% ( 11) 00:08:05.791 12250.191 - 12300.603: 1.7160% ( 18) 00:08:05.791 12300.603 - 12351.015: 1.8834% ( 12) 00:08:05.791 12351.015 - 12401.428: 2.0787% ( 14) 00:08:05.791 12401.428 - 12451.840: 2.3019% ( 16) 00:08:05.791 12451.840 - 12502.252: 2.5391% ( 17) 00:08:05.791 12502.252 - 12552.665: 2.8181% ( 20) 00:08:05.791 12552.665 - 12603.077: 2.9994% ( 13) 00:08:05.791 12603.077 - 12653.489: 3.1808% ( 13) 00:08:05.791 12653.489 - 12703.902: 3.4040% ( 16) 00:08:05.791 12703.902 - 12754.314: 3.6133% ( 15) 00:08:05.791 12754.314 - 12804.726: 3.8504% ( 17) 00:08:05.791 12804.726 - 12855.138: 4.0737% ( 16) 00:08:05.791 12855.138 - 12905.551: 4.3108% ( 17) 00:08:05.791 12905.551 - 13006.375: 4.9107% ( 43) 00:08:05.791 13006.375 - 13107.200: 5.6641% ( 54) 00:08:05.791 13107.200 - 13208.025: 6.6267% ( 69) 00:08:05.791 13208.025 - 13308.849: 7.7427% ( 80) 00:08:05.791 13308.849 - 13409.674: 8.8030% ( 76) 00:08:05.791 13409.674 - 13510.498: 9.9470% ( 82) 00:08:05.791 13510.498 - 13611.323: 11.3002% ( 97) 00:08:05.791 13611.323 - 13712.148: 12.6395% ( 96) 00:08:05.791 13712.148 - 13812.972: 13.9927% ( 97) 00:08:05.791 13812.972 - 13913.797: 15.2483% ( 90) 00:08:05.791 13913.797 - 14014.622: 16.4900% ( 89) 00:08:05.791 14014.622 - 14115.446: 17.6897% ( 86) 00:08:05.792 14115.446 - 14216.271: 18.8616% ( 84) 00:08:05.792 14216.271 - 14317.095: 19.9777% ( 80) 00:08:05.792 14317.095 - 14417.920: 21.1775% ( 86) 00:08:05.792 14417.920 - 14518.745: 22.3214% ( 82) 00:08:05.792 14518.745 - 14619.569: 23.4235% ( 79) 00:08:05.792 14619.569 - 14720.394: 24.3443% ( 66) 00:08:05.792 14720.394 - 14821.218: 25.2093% ( 62) 00:08:05.792 14821.218 - 14922.043: 26.0045% ( 57) 00:08:05.792 14922.043 - 15022.868: 26.7857% ( 56) 00:08:05.792 15022.868 - 15123.692: 27.6088% ( 59) 00:08:05.792 15123.692 - 15224.517: 28.5854% ( 70) 00:08:05.792 15224.517 - 15325.342: 29.3806% ( 57) 00:08:05.792 15325.342 - 15426.166: 30.0781% ( 50) 00:08:05.792 15426.166 - 15526.991: 30.7617% ( 49) 00:08:05.792 15526.991 - 15627.815: 31.4453% ( 49) 00:08:05.792 15627.815 - 15728.640: 32.1429% ( 50) 00:08:05.792 15728.640 - 15829.465: 32.8265% ( 49) 00:08:05.792 15829.465 - 15930.289: 33.4821% ( 47) 00:08:05.792 15930.289 - 16031.114: 34.0820% ( 43) 00:08:05.792 16031.114 - 16131.938: 34.7098% ( 45) 00:08:05.792 
16131.938 - 16232.763: 35.2958% ( 42) 00:08:05.792 16232.763 - 16333.588: 35.9235% ( 45) 00:08:05.792 16333.588 - 16434.412: 36.5513% ( 45) 00:08:05.792 16434.412 - 16535.237: 37.3047% ( 54) 00:08:05.792 16535.237 - 16636.062: 38.2394% ( 67) 00:08:05.792 16636.062 - 16736.886: 39.1462% ( 65) 00:08:05.792 16736.886 - 16837.711: 39.9414% ( 57) 00:08:05.792 16837.711 - 16938.535: 40.8901% ( 68) 00:08:05.792 16938.535 - 17039.360: 41.7411% ( 61) 00:08:05.792 17039.360 - 17140.185: 42.6200% ( 63) 00:08:05.792 17140.185 - 17241.009: 43.4570% ( 60) 00:08:05.792 17241.009 - 17341.834: 44.3917% ( 67) 00:08:05.792 17341.834 - 17442.658: 45.4381% ( 75) 00:08:05.792 17442.658 - 17543.483: 46.5820% ( 82) 00:08:05.792 17543.483 - 17644.308: 47.7260% ( 82) 00:08:05.792 17644.308 - 17745.132: 48.8281% ( 79) 00:08:05.792 17745.132 - 17845.957: 50.0140% ( 85) 00:08:05.792 17845.957 - 17946.782: 51.1998% ( 85) 00:08:05.792 17946.782 - 18047.606: 52.4693% ( 91) 00:08:05.792 18047.606 - 18148.431: 53.7667% ( 93) 00:08:05.792 18148.431 - 18249.255: 54.9665% ( 86) 00:08:05.792 18249.255 - 18350.080: 56.2500% ( 92) 00:08:05.792 18350.080 - 18450.905: 57.4219% ( 84) 00:08:05.792 18450.905 - 18551.729: 58.6217% ( 86) 00:08:05.792 18551.729 - 18652.554: 59.8075% ( 85) 00:08:05.792 18652.554 - 18753.378: 61.0073% ( 86) 00:08:05.792 18753.378 - 18854.203: 62.1931% ( 85) 00:08:05.792 18854.203 - 18955.028: 63.3092% ( 80) 00:08:05.792 18955.028 - 19055.852: 64.5787% ( 91) 00:08:05.792 19055.852 - 19156.677: 65.6808% ( 79) 00:08:05.792 19156.677 - 19257.502: 66.8666% ( 85) 00:08:05.792 19257.502 - 19358.326: 67.8571% ( 71) 00:08:05.792 19358.326 - 19459.151: 68.9035% ( 75) 00:08:05.792 19459.151 - 19559.975: 69.9637% ( 76) 00:08:05.792 19559.975 - 19660.800: 70.9124% ( 68) 00:08:05.792 19660.800 - 19761.625: 71.9727% ( 76) 00:08:05.792 19761.625 - 19862.449: 73.0608% ( 78) 00:08:05.792 19862.449 - 19963.274: 74.0095% ( 68) 00:08:05.792 19963.274 - 20064.098: 75.0140% ( 72) 00:08:05.792 20064.098 - 20164.923: 75.8650% ( 61) 00:08:05.792 20164.923 - 20265.748: 76.8555% ( 71) 00:08:05.792 20265.748 - 20366.572: 77.7623% ( 65) 00:08:05.792 20366.572 - 20467.397: 78.5854% ( 59) 00:08:05.792 20467.397 - 20568.222: 79.3108% ( 52) 00:08:05.792 20568.222 - 20669.046: 80.0921% ( 56) 00:08:05.792 20669.046 - 20769.871: 81.0547% ( 69) 00:08:05.792 20769.871 - 20870.695: 81.9475% ( 64) 00:08:05.792 20870.695 - 20971.520: 82.8125% ( 62) 00:08:05.792 20971.520 - 21072.345: 83.7472% ( 67) 00:08:05.792 21072.345 - 21173.169: 84.6540% ( 65) 00:08:05.792 21173.169 - 21273.994: 85.5050% ( 61) 00:08:05.792 21273.994 - 21374.818: 86.3281% ( 59) 00:08:05.792 21374.818 - 21475.643: 87.1512% ( 59) 00:08:05.792 21475.643 - 21576.468: 87.9883% ( 60) 00:08:05.792 21576.468 - 21677.292: 88.8114% ( 59) 00:08:05.792 21677.292 - 21778.117: 89.6763% ( 62) 00:08:05.792 21778.117 - 21878.942: 90.5134% ( 60) 00:08:05.792 21878.942 - 21979.766: 91.3504% ( 60) 00:08:05.792 21979.766 - 22080.591: 92.0759% ( 52) 00:08:05.792 22080.591 - 22181.415: 92.7176% ( 46) 00:08:05.792 22181.415 - 22282.240: 93.3454% ( 45) 00:08:05.792 22282.240 - 22383.065: 93.7919% ( 32) 00:08:05.792 22383.065 - 22483.889: 94.2941% ( 36) 00:08:05.792 22483.889 - 22584.714: 94.7684% ( 34) 00:08:05.792 22584.714 - 22685.538: 95.2288% ( 33) 00:08:05.792 22685.538 - 22786.363: 95.7310% ( 36) 00:08:05.792 22786.363 - 22887.188: 96.1635% ( 31) 00:08:05.792 22887.188 - 22988.012: 96.4844% ( 23) 00:08:05.792 22988.012 - 23088.837: 96.7773% ( 21) 00:08:05.792 23088.837 - 23189.662: 97.0145% ( 
17) 00:08:05.792 23189.662 - 23290.486: 97.1680% ( 11) 00:08:05.792 23290.486 - 23391.311: 97.3214% ( 11) 00:08:05.792 23391.311 - 23492.135: 97.4609% ( 10) 00:08:05.792 23492.135 - 23592.960: 97.5586% ( 7) 00:08:05.792 23592.960 - 23693.785: 97.6702% ( 8) 00:08:05.792 23693.785 - 23794.609: 97.7400% ( 5) 00:08:05.792 23794.609 - 23895.434: 97.8097% ( 5) 00:08:05.792 23895.434 - 23996.258: 97.8795% ( 5) 00:08:05.792 23996.258 - 24097.083: 97.9492% ( 5) 00:08:05.792 24097.083 - 24197.908: 98.0190% ( 5) 00:08:05.792 24197.908 - 24298.732: 98.0748% ( 4) 00:08:05.792 24298.732 - 24399.557: 98.1585% ( 6) 00:08:05.792 24399.557 - 24500.382: 98.2003% ( 3) 00:08:05.792 24500.382 - 24601.206: 98.2143% ( 1) 00:08:05.792 35691.914 - 35893.563: 98.2561% ( 3) 00:08:05.792 35893.563 - 36095.212: 98.3259% ( 5) 00:08:05.792 36095.212 - 36296.862: 98.3817% ( 4) 00:08:05.792 36296.862 - 36498.511: 98.4515% ( 5) 00:08:05.792 36498.511 - 36700.160: 98.5212% ( 5) 00:08:05.792 36700.160 - 36901.809: 98.5770% ( 4) 00:08:05.792 36901.809 - 37103.458: 98.6468% ( 5) 00:08:05.792 37103.458 - 37305.108: 98.7165% ( 5) 00:08:05.792 37305.108 - 37506.757: 98.7863% ( 5) 00:08:05.792 37506.757 - 37708.406: 98.8560% ( 5) 00:08:05.792 37708.406 - 37910.055: 98.9258% ( 5) 00:08:05.792 37910.055 - 38111.705: 98.9955% ( 5) 00:08:05.792 38111.705 - 38313.354: 99.0653% ( 5) 00:08:05.792 38313.354 - 38515.003: 99.1071% ( 3) 00:08:05.792 43757.883 - 43959.532: 99.1211% ( 1) 00:08:05.792 43959.532 - 44161.182: 99.1908% ( 5) 00:08:05.792 44161.182 - 44362.831: 99.2606% ( 5) 00:08:05.792 44362.831 - 44564.480: 99.3304% ( 5) 00:08:05.792 44564.480 - 44766.129: 99.3862% ( 4) 00:08:05.792 44766.129 - 44967.778: 99.4420% ( 4) 00:08:05.792 44967.778 - 45169.428: 99.5117% ( 5) 00:08:05.792 45169.428 - 45371.077: 99.5675% ( 4) 00:08:05.792 45371.077 - 45572.726: 99.6373% ( 5) 00:08:05.792 45572.726 - 45774.375: 99.7070% ( 5) 00:08:05.792 45774.375 - 45976.025: 99.7768% ( 5) 00:08:05.792 45976.025 - 46177.674: 99.8326% ( 4) 00:08:05.792 46177.674 - 46379.323: 99.9023% ( 5) 00:08:05.792 46379.323 - 46580.972: 99.9721% ( 5) 00:08:05.792 46580.972 - 46782.622: 100.0000% ( 2) 00:08:05.792 00:08:05.792 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:05.792 ============================================================================== 00:08:05.792 Range in us Cumulative IO count 00:08:05.792 11645.243 - 11695.655: 0.0558% ( 4) 00:08:05.792 11695.655 - 11746.068: 0.1116% ( 4) 00:08:05.792 11746.068 - 11796.480: 0.2093% ( 7) 00:08:05.792 11796.480 - 11846.892: 0.3209% ( 8) 00:08:05.792 11846.892 - 11897.305: 0.4325% ( 8) 00:08:05.792 11897.305 - 11947.717: 0.5580% ( 9) 00:08:05.792 11947.717 - 11998.129: 0.7254% ( 12) 00:08:05.792 11998.129 - 12048.542: 0.8650% ( 10) 00:08:05.792 12048.542 - 12098.954: 1.0463% ( 13) 00:08:05.792 12098.954 - 12149.366: 1.1858% ( 10) 00:08:05.792 12149.366 - 12199.778: 1.3672% ( 13) 00:08:05.792 12199.778 - 12250.191: 1.5346% ( 12) 00:08:05.792 12250.191 - 12300.603: 1.7439% ( 15) 00:08:05.792 12300.603 - 12351.015: 1.9531% ( 15) 00:08:05.792 12351.015 - 12401.428: 2.1484% ( 14) 00:08:05.792 12401.428 - 12451.840: 2.4275% ( 20) 00:08:05.792 12451.840 - 12502.252: 2.6925% ( 19) 00:08:05.792 12502.252 - 12552.665: 3.0134% ( 23) 00:08:05.792 12552.665 - 12603.077: 3.3343% ( 23) 00:08:05.792 12603.077 - 12653.489: 3.6551% ( 23) 00:08:05.792 12653.489 - 12703.902: 3.9621% ( 22) 00:08:05.792 12703.902 - 12754.314: 4.2690% ( 22) 00:08:05.792 12754.314 - 12804.726: 4.6177% ( 25) 00:08:05.792 12804.726 - 
12855.138: 4.9247% ( 22) 00:08:05.792 12855.138 - 12905.551: 5.2595% ( 24) 00:08:05.792 12905.551 - 13006.375: 5.8733% ( 44) 00:08:05.792 13006.375 - 13107.200: 6.5848% ( 51) 00:08:05.792 13107.200 - 13208.025: 7.2963% ( 51) 00:08:05.792 13208.025 - 13308.849: 8.0915% ( 57) 00:08:05.792 13308.849 - 13409.674: 8.9844% ( 64) 00:08:05.792 13409.674 - 13510.498: 9.7935% ( 58) 00:08:05.792 13510.498 - 13611.323: 10.6585% ( 62) 00:08:05.792 13611.323 - 13712.148: 11.6908% ( 74) 00:08:05.792 13712.148 - 13812.972: 12.7232% ( 74) 00:08:05.792 13812.972 - 13913.797: 13.8253% ( 79) 00:08:05.792 13913.797 - 14014.622: 14.9275% ( 79) 00:08:05.793 14014.622 - 14115.446: 16.0993% ( 84) 00:08:05.793 14115.446 - 14216.271: 17.3270% ( 88) 00:08:05.793 14216.271 - 14317.095: 18.4989% ( 84) 00:08:05.793 14317.095 - 14417.920: 19.5731% ( 77) 00:08:05.793 14417.920 - 14518.745: 20.7729% ( 86) 00:08:05.793 14518.745 - 14619.569: 22.1261% ( 97) 00:08:05.793 14619.569 - 14720.394: 23.2980% ( 84) 00:08:05.793 14720.394 - 14821.218: 24.4280% ( 81) 00:08:05.793 14821.218 - 14922.043: 25.6138% ( 85) 00:08:05.793 14922.043 - 15022.868: 26.7578% ( 82) 00:08:05.793 15022.868 - 15123.692: 27.7204% ( 69) 00:08:05.793 15123.692 - 15224.517: 28.6551% ( 67) 00:08:05.793 15224.517 - 15325.342: 29.5340% ( 63) 00:08:05.793 15325.342 - 15426.166: 30.3013% ( 55) 00:08:05.793 15426.166 - 15526.991: 31.0826% ( 56) 00:08:05.793 15526.991 - 15627.815: 31.7522% ( 48) 00:08:05.793 15627.815 - 15728.640: 32.3800% ( 45) 00:08:05.793 15728.640 - 15829.465: 33.0497% ( 48) 00:08:05.793 15829.465 - 15930.289: 33.6914% ( 46) 00:08:05.793 15930.289 - 16031.114: 34.2773% ( 42) 00:08:05.793 16031.114 - 16131.938: 34.9609% ( 49) 00:08:05.793 16131.938 - 16232.763: 35.6864% ( 52) 00:08:05.793 16232.763 - 16333.588: 36.2863% ( 43) 00:08:05.793 16333.588 - 16434.412: 36.8025% ( 37) 00:08:05.793 16434.412 - 16535.237: 37.3326% ( 38) 00:08:05.793 16535.237 - 16636.062: 37.9185% ( 42) 00:08:05.793 16636.062 - 16736.886: 38.5463% ( 45) 00:08:05.793 16736.886 - 16837.711: 39.2718% ( 52) 00:08:05.793 16837.711 - 16938.535: 40.1786% ( 65) 00:08:05.793 16938.535 - 17039.360: 41.0575% ( 63) 00:08:05.793 17039.360 - 17140.185: 41.9224% ( 62) 00:08:05.793 17140.185 - 17241.009: 42.8990% ( 70) 00:08:05.793 17241.009 - 17341.834: 43.9174% ( 73) 00:08:05.793 17341.834 - 17442.658: 45.0474% ( 81) 00:08:05.793 17442.658 - 17543.483: 46.2054% ( 83) 00:08:05.793 17543.483 - 17644.308: 47.4330% ( 88) 00:08:05.793 17644.308 - 17745.132: 48.6189% ( 85) 00:08:05.793 17745.132 - 17845.957: 49.8326% ( 87) 00:08:05.793 17845.957 - 17946.782: 51.0603% ( 88) 00:08:05.793 17946.782 - 18047.606: 52.2182% ( 83) 00:08:05.793 18047.606 - 18148.431: 53.5017% ( 92) 00:08:05.793 18148.431 - 18249.255: 54.7852% ( 92) 00:08:05.793 18249.255 - 18350.080: 56.0407% ( 90) 00:08:05.793 18350.080 - 18450.905: 57.2824% ( 89) 00:08:05.793 18450.905 - 18551.729: 58.4821% ( 86) 00:08:05.793 18551.729 - 18652.554: 59.6819% ( 86) 00:08:05.793 18652.554 - 18753.378: 60.7980% ( 80) 00:08:05.793 18753.378 - 18854.203: 62.0117% ( 87) 00:08:05.793 18854.203 - 18955.028: 63.3510% ( 96) 00:08:05.793 18955.028 - 19055.852: 64.6763% ( 95) 00:08:05.793 19055.852 - 19156.677: 65.9459% ( 91) 00:08:05.793 19156.677 - 19257.502: 67.2991% ( 97) 00:08:05.793 19257.502 - 19358.326: 68.4989% ( 86) 00:08:05.793 19358.326 - 19459.151: 69.6987% ( 86) 00:08:05.793 19459.151 - 19559.975: 70.8845% ( 85) 00:08:05.793 19559.975 - 19660.800: 72.0006% ( 80) 00:08:05.793 19660.800 - 19761.625: 73.0608% ( 76) 00:08:05.793 
19761.625 - 19862.449: 74.1211% ( 76) 00:08:05.793 19862.449 - 19963.274: 75.0837% ( 69) 00:08:05.793 19963.274 - 20064.098: 76.0603% ( 70) 00:08:05.793 20064.098 - 20164.923: 76.8834% ( 59) 00:08:05.793 20164.923 - 20265.748: 77.7344% ( 61) 00:08:05.793 20265.748 - 20366.572: 78.7109% ( 70) 00:08:05.793 20366.572 - 20467.397: 79.5201% ( 58) 00:08:05.793 20467.397 - 20568.222: 80.3850% ( 62) 00:08:05.793 20568.222 - 20669.046: 81.3058% ( 66) 00:08:05.793 20669.046 - 20769.871: 82.2684% ( 69) 00:08:05.793 20769.871 - 20870.695: 83.0915% ( 59) 00:08:05.793 20870.695 - 20971.520: 83.9425% ( 61) 00:08:05.793 20971.520 - 21072.345: 84.7796% ( 60) 00:08:05.793 21072.345 - 21173.169: 85.6027% ( 59) 00:08:05.793 21173.169 - 21273.994: 86.3979% ( 57) 00:08:05.793 21273.994 - 21374.818: 87.1791% ( 56) 00:08:05.793 21374.818 - 21475.643: 87.9464% ( 55) 00:08:05.793 21475.643 - 21576.468: 88.7277% ( 56) 00:08:05.793 21576.468 - 21677.292: 89.3555% ( 45) 00:08:05.793 21677.292 - 21778.117: 89.9693% ( 44) 00:08:05.793 21778.117 - 21878.942: 90.6250% ( 47) 00:08:05.793 21878.942 - 21979.766: 91.2249% ( 43) 00:08:05.793 21979.766 - 22080.591: 91.8527% ( 45) 00:08:05.793 22080.591 - 22181.415: 92.4526% ( 43) 00:08:05.793 22181.415 - 22282.240: 93.0385% ( 42) 00:08:05.793 22282.240 - 22383.065: 93.5826% ( 39) 00:08:05.793 22383.065 - 22483.889: 94.0709% ( 35) 00:08:05.793 22483.889 - 22584.714: 94.5871% ( 37) 00:08:05.793 22584.714 - 22685.538: 95.0474% ( 33) 00:08:05.793 22685.538 - 22786.363: 95.5497% ( 36) 00:08:05.793 22786.363 - 22887.188: 96.0100% ( 33) 00:08:05.793 22887.188 - 22988.012: 96.4286% ( 30) 00:08:05.793 22988.012 - 23088.837: 96.7494% ( 23) 00:08:05.793 23088.837 - 23189.662: 96.9587% ( 15) 00:08:05.793 23189.662 - 23290.486: 97.1122% ( 11) 00:08:05.793 23290.486 - 23391.311: 97.3075% ( 14) 00:08:05.793 23391.311 - 23492.135: 97.4330% ( 9) 00:08:05.793 23492.135 - 23592.960: 97.5725% ( 10) 00:08:05.793 23592.960 - 23693.785: 97.6423% ( 5) 00:08:05.793 23693.785 - 23794.609: 97.7121% ( 5) 00:08:05.793 23794.609 - 23895.434: 97.7958% ( 6) 00:08:05.793 23895.434 - 23996.258: 97.8795% ( 6) 00:08:05.793 23996.258 - 24097.083: 97.9492% ( 5) 00:08:05.793 24097.083 - 24197.908: 98.0190% ( 5) 00:08:05.793 24197.908 - 24298.732: 98.0887% ( 5) 00:08:05.793 24298.732 - 24399.557: 98.1724% ( 6) 00:08:05.793 24399.557 - 24500.382: 98.2143% ( 3) 00:08:05.793 33473.772 - 33675.422: 98.2561% ( 3) 00:08:05.793 33675.422 - 33877.071: 98.3119% ( 4) 00:08:05.793 33877.071 - 34078.720: 98.3677% ( 4) 00:08:05.793 34078.720 - 34280.369: 98.4375% ( 5) 00:08:05.793 34280.369 - 34482.018: 98.4933% ( 4) 00:08:05.793 34482.018 - 34683.668: 98.5631% ( 5) 00:08:05.793 34683.668 - 34885.317: 98.6328% ( 5) 00:08:05.793 34885.317 - 35086.966: 98.7026% ( 5) 00:08:05.793 35086.966 - 35288.615: 98.7584% ( 4) 00:08:05.793 35288.615 - 35490.265: 98.8281% ( 5) 00:08:05.793 35490.265 - 35691.914: 98.8979% ( 5) 00:08:05.793 35691.914 - 35893.563: 98.9676% ( 5) 00:08:05.793 35893.563 - 36095.212: 99.0374% ( 5) 00:08:05.793 36095.212 - 36296.862: 99.1071% ( 5) 00:08:05.793 41741.391 - 41943.040: 99.1350% ( 2) 00:08:05.793 41943.040 - 42144.689: 99.1908% ( 4) 00:08:05.793 42144.689 - 42346.338: 99.2606% ( 5) 00:08:05.793 42346.338 - 42547.988: 99.3304% ( 5) 00:08:05.793 42547.988 - 42749.637: 99.3862% ( 4) 00:08:05.793 42749.637 - 42951.286: 99.4559% ( 5) 00:08:05.793 42951.286 - 43152.935: 99.5257% ( 5) 00:08:05.793 43152.935 - 43354.585: 99.5954% ( 5) 00:08:05.793 43354.585 - 43556.234: 99.6652% ( 5) 00:08:05.793 43556.234 - 
43757.883: 99.7210% ( 4) 00:08:05.793 43757.883 - 43959.532: 99.7907% ( 5) 00:08:05.793 43959.532 - 44161.182: 99.8605% ( 5) 00:08:05.793 44161.182 - 44362.831: 99.9163% ( 4) 00:08:05.793 44362.831 - 44564.480: 99.9860% ( 5) 00:08:05.793 44564.480 - 44766.129: 100.0000% ( 1) 00:08:05.793 00:08:05.793 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:05.793 ============================================================================== 00:08:05.793 Range in us Cumulative IO count 00:08:05.793 11695.655 - 11746.068: 0.0140% ( 1) 00:08:05.793 11746.068 - 11796.480: 0.0698% ( 4) 00:08:05.793 11796.480 - 11846.892: 0.1535% ( 6) 00:08:05.793 11846.892 - 11897.305: 0.2511% ( 7) 00:08:05.793 11897.305 - 11947.717: 0.3488% ( 7) 00:08:05.793 11947.717 - 11998.129: 0.4743% ( 9) 00:08:05.793 11998.129 - 12048.542: 0.6278% ( 11) 00:08:05.793 12048.542 - 12098.954: 0.8371% ( 15) 00:08:05.793 12098.954 - 12149.366: 1.0324% ( 14) 00:08:05.793 12149.366 - 12199.778: 1.2137% ( 13) 00:08:05.793 12199.778 - 12250.191: 1.3951% ( 13) 00:08:05.793 12250.191 - 12300.603: 1.6323% ( 17) 00:08:05.793 12300.603 - 12351.015: 1.8694% ( 17) 00:08:05.793 12351.015 - 12401.428: 2.1624% ( 21) 00:08:05.793 12401.428 - 12451.840: 2.4414% ( 20) 00:08:05.793 12451.840 - 12502.252: 2.7344% ( 21) 00:08:05.793 12502.252 - 12552.665: 3.1110% ( 27) 00:08:05.793 12552.665 - 12603.077: 3.5017% ( 28) 00:08:05.793 12603.077 - 12653.489: 3.8923% ( 28) 00:08:05.793 12653.489 - 12703.902: 4.2271% ( 24) 00:08:05.793 12703.902 - 12754.314: 4.6038% ( 27) 00:08:05.793 12754.314 - 12804.726: 4.9386% ( 24) 00:08:05.793 12804.726 - 12855.138: 5.2734% ( 24) 00:08:05.793 12855.138 - 12905.551: 5.6222% ( 25) 00:08:05.793 12905.551 - 13006.375: 6.4174% ( 57) 00:08:05.793 13006.375 - 13107.200: 7.2266% ( 58) 00:08:05.793 13107.200 - 13208.025: 8.1613% ( 67) 00:08:05.793 13208.025 - 13308.849: 9.0681% ( 65) 00:08:05.793 13308.849 - 13409.674: 10.0028% ( 67) 00:08:05.793 13409.674 - 13510.498: 10.9794% ( 70) 00:08:05.793 13510.498 - 13611.323: 11.9420% ( 69) 00:08:05.793 13611.323 - 13712.148: 12.9604% ( 73) 00:08:05.793 13712.148 - 13812.972: 13.9788% ( 73) 00:08:05.793 13812.972 - 13913.797: 15.0251% ( 75) 00:08:05.793 13913.797 - 14014.622: 16.1830% ( 83) 00:08:05.793 14014.622 - 14115.446: 17.3131% ( 81) 00:08:05.793 14115.446 - 14216.271: 18.3175% ( 72) 00:08:05.793 14216.271 - 14317.095: 19.3080% ( 71) 00:08:05.793 14317.095 - 14417.920: 20.1590% ( 61) 00:08:05.793 14417.920 - 14518.745: 21.0379% ( 63) 00:08:05.793 14518.745 - 14619.569: 21.8890% ( 61) 00:08:05.793 14619.569 - 14720.394: 22.7679% ( 63) 00:08:05.793 14720.394 - 14821.218: 23.7863% ( 73) 00:08:05.793 14821.218 - 14922.043: 24.6652% ( 63) 00:08:05.793 14922.043 - 15022.868: 25.7673% ( 79) 00:08:05.793 15022.868 - 15123.692: 26.6323% ( 62) 00:08:05.793 15123.692 - 15224.517: 27.4135% ( 56) 00:08:05.793 15224.517 - 15325.342: 28.2087% ( 57) 00:08:05.793 15325.342 - 15426.166: 29.1155% ( 65) 00:08:05.794 15426.166 - 15526.991: 29.8270% ( 51) 00:08:05.794 15526.991 - 15627.815: 30.4967% ( 48) 00:08:05.794 15627.815 - 15728.640: 31.2640% ( 55) 00:08:05.794 15728.640 - 15829.465: 32.0173% ( 54) 00:08:05.794 15829.465 - 15930.289: 32.7148% ( 50) 00:08:05.794 15930.289 - 16031.114: 33.3984% ( 49) 00:08:05.794 16031.114 - 16131.938: 34.2355% ( 60) 00:08:05.794 16131.938 - 16232.763: 35.1004% ( 62) 00:08:05.794 16232.763 - 16333.588: 35.9235% ( 59) 00:08:05.794 16333.588 - 16434.412: 36.8025% ( 63) 00:08:05.794 16434.412 - 16535.237: 37.7372% ( 67) 00:08:05.794 16535.237 - 
16636.062: 38.6300% ( 64) 00:08:05.794 16636.062 - 16736.886: 39.5089% ( 63) 00:08:05.794 16736.886 - 16837.711: 40.4436% ( 67) 00:08:05.794 16837.711 - 16938.535: 41.4062% ( 69) 00:08:05.794 16938.535 - 17039.360: 42.3828% ( 70) 00:08:05.794 17039.360 - 17140.185: 43.4152% ( 74) 00:08:05.794 17140.185 - 17241.009: 44.3080% ( 64) 00:08:05.794 17241.009 - 17341.834: 45.2148% ( 65) 00:08:05.794 17341.834 - 17442.658: 46.2193% ( 72) 00:08:05.794 17442.658 - 17543.483: 47.1680% ( 68) 00:08:05.794 17543.483 - 17644.308: 48.3259% ( 83) 00:08:05.794 17644.308 - 17745.132: 49.3304% ( 72) 00:08:05.794 17745.132 - 17845.957: 50.3348% ( 72) 00:08:05.794 17845.957 - 17946.782: 51.5067% ( 84) 00:08:05.794 17946.782 - 18047.606: 52.5251% ( 73) 00:08:05.794 18047.606 - 18148.431: 53.7109% ( 85) 00:08:05.794 18148.431 - 18249.255: 54.8410% ( 81) 00:08:05.794 18249.255 - 18350.080: 55.9431% ( 79) 00:08:05.794 18350.080 - 18450.905: 57.0731% ( 81) 00:08:05.794 18450.905 - 18551.729: 58.2310% ( 83) 00:08:05.794 18551.729 - 18652.554: 59.3331% ( 79) 00:08:05.794 18652.554 - 18753.378: 60.4771% ( 82) 00:08:05.794 18753.378 - 18854.203: 61.5653% ( 78) 00:08:05.794 18854.203 - 18955.028: 62.7651% ( 86) 00:08:05.794 18955.028 - 19055.852: 63.9509% ( 85) 00:08:05.794 19055.852 - 19156.677: 65.3599% ( 101) 00:08:05.794 19156.677 - 19257.502: 66.7271% ( 98) 00:08:05.794 19257.502 - 19358.326: 67.9967% ( 91) 00:08:05.794 19358.326 - 19459.151: 69.1685% ( 84) 00:08:05.794 19459.151 - 19559.975: 70.4102% ( 89) 00:08:05.794 19559.975 - 19660.800: 71.5681% ( 83) 00:08:05.794 19660.800 - 19761.625: 72.7679% ( 86) 00:08:05.794 19761.625 - 19862.449: 73.9118% ( 82) 00:08:05.794 19862.449 - 19963.274: 75.0558% ( 82) 00:08:05.794 19963.274 - 20064.098: 76.0045% ( 68) 00:08:05.794 20064.098 - 20164.923: 76.9671% ( 69) 00:08:05.794 20164.923 - 20265.748: 77.8181% ( 61) 00:08:05.794 20265.748 - 20366.572: 78.6830% ( 62) 00:08:05.794 20366.572 - 20467.397: 79.5898% ( 65) 00:08:05.794 20467.397 - 20568.222: 80.6501% ( 76) 00:08:05.794 20568.222 - 20669.046: 81.5151% ( 62) 00:08:05.794 20669.046 - 20769.871: 82.2684% ( 54) 00:08:05.794 20769.871 - 20870.695: 83.0497% ( 56) 00:08:05.794 20870.695 - 20971.520: 83.8588% ( 58) 00:08:05.794 20971.520 - 21072.345: 84.6261% ( 55) 00:08:05.794 21072.345 - 21173.169: 85.4213% ( 57) 00:08:05.794 21173.169 - 21273.994: 86.1328% ( 51) 00:08:05.794 21273.994 - 21374.818: 86.8443% ( 51) 00:08:05.794 21374.818 - 21475.643: 87.4442% ( 43) 00:08:05.794 21475.643 - 21576.468: 88.0999% ( 47) 00:08:05.794 21576.468 - 21677.292: 88.7974% ( 50) 00:08:05.794 21677.292 - 21778.117: 89.4392% ( 46) 00:08:05.794 21778.117 - 21878.942: 90.0530% ( 44) 00:08:05.794 21878.942 - 21979.766: 90.6669% ( 44) 00:08:05.794 21979.766 - 22080.591: 91.2388% ( 41) 00:08:05.794 22080.591 - 22181.415: 91.8108% ( 41) 00:08:05.794 22181.415 - 22282.240: 92.4247% ( 44) 00:08:05.794 22282.240 - 22383.065: 93.0385% ( 44) 00:08:05.794 22383.065 - 22483.889: 93.5407% ( 36) 00:08:05.794 22483.889 - 22584.714: 94.0569% ( 37) 00:08:05.794 22584.714 - 22685.538: 94.5871% ( 38) 00:08:05.794 22685.538 - 22786.363: 95.1032% ( 37) 00:08:05.794 22786.363 - 22887.188: 95.6334% ( 38) 00:08:05.794 22887.188 - 22988.012: 96.0379% ( 29) 00:08:05.794 22988.012 - 23088.837: 96.3728% ( 24) 00:08:05.794 23088.837 - 23189.662: 96.6657% ( 21) 00:08:05.794 23189.662 - 23290.486: 96.9727% ( 22) 00:08:05.794 23290.486 - 23391.311: 97.2238% ( 18) 00:08:05.794 23391.311 - 23492.135: 97.4609% ( 17) 00:08:05.794 23492.135 - 23592.960: 97.6283% ( 12) 
00:08:05.794 23592.960 - 23693.785: 97.7679% ( 10) 00:08:05.794 23693.785 - 23794.609: 97.8934% ( 9) 00:08:05.794 23794.609 - 23895.434: 98.0050% ( 8) 00:08:05.794 23895.434 - 23996.258: 98.1027% ( 7) 00:08:05.794 23996.258 - 24097.083: 98.1445% ( 3) 00:08:05.794 24097.083 - 24197.908: 98.1724% ( 2) 00:08:05.794 24197.908 - 24298.732: 98.2003% ( 2) 00:08:05.794 24298.732 - 24399.557: 98.2143% ( 1) 00:08:05.794 30852.332 - 31053.982: 98.2422% ( 2) 00:08:05.794 31053.982 - 31255.631: 98.3119% ( 5) 00:08:05.794 31255.631 - 31457.280: 98.3677% ( 4) 00:08:05.794 31457.280 - 31658.929: 98.4375% ( 5) 00:08:05.794 31658.929 - 31860.578: 98.5073% ( 5) 00:08:05.794 31860.578 - 32062.228: 98.5770% ( 5) 00:08:05.794 32062.228 - 32263.877: 98.6468% ( 5) 00:08:05.794 32263.877 - 32465.526: 98.7026% ( 4) 00:08:05.794 32465.526 - 32667.175: 98.7723% ( 5) 00:08:05.794 32667.175 - 32868.825: 98.8421% ( 5) 00:08:05.794 32868.825 - 33070.474: 98.9118% ( 5) 00:08:05.794 33070.474 - 33272.123: 98.9816% ( 5) 00:08:05.794 33272.123 - 33473.772: 99.0513% ( 5) 00:08:05.794 33473.772 - 33675.422: 99.1071% ( 4) 00:08:05.794 39724.898 - 39926.548: 99.1629% ( 4) 00:08:05.794 39926.548 - 40128.197: 99.2188% ( 4) 00:08:05.794 40128.197 - 40329.846: 99.2885% ( 5) 00:08:05.794 40329.846 - 40531.495: 99.3583% ( 5) 00:08:05.794 40531.495 - 40733.145: 99.4280% ( 5) 00:08:05.794 40733.145 - 40934.794: 99.4838% ( 4) 00:08:05.794 40934.794 - 41136.443: 99.5536% ( 5) 00:08:05.794 41136.443 - 41338.092: 99.6233% ( 5) 00:08:05.794 41338.092 - 41539.742: 99.6791% ( 4) 00:08:05.794 41539.742 - 41741.391: 99.7489% ( 5) 00:08:05.794 41741.391 - 41943.040: 99.8186% ( 5) 00:08:05.794 41943.040 - 42144.689: 99.8884% ( 5) 00:08:05.794 42144.689 - 42346.338: 99.9581% ( 5) 00:08:05.794 42346.338 - 42547.988: 100.0000% ( 3) 00:08:05.794 00:08:05.794 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:05.794 ============================================================================== 00:08:05.794 Range in us Cumulative IO count 00:08:05.794 11695.655 - 11746.068: 0.0419% ( 3) 00:08:05.794 11746.068 - 11796.480: 0.1535% ( 8) 00:08:05.794 11796.480 - 11846.892: 0.2930% ( 10) 00:08:05.794 11846.892 - 11897.305: 0.3767% ( 6) 00:08:05.794 11897.305 - 11947.717: 0.4604% ( 6) 00:08:05.794 11947.717 - 11998.129: 0.5580% ( 7) 00:08:05.794 11998.129 - 12048.542: 0.6696% ( 8) 00:08:05.794 12048.542 - 12098.954: 0.7812% ( 8) 00:08:05.794 12098.954 - 12149.366: 0.8789% ( 7) 00:08:05.794 12149.366 - 12199.778: 1.0324% ( 11) 00:08:05.794 12199.778 - 12250.191: 1.1719% ( 10) 00:08:05.794 12250.191 - 12300.603: 1.3532% ( 13) 00:08:05.794 12300.603 - 12351.015: 1.6323% ( 20) 00:08:05.794 12351.015 - 12401.428: 1.8834% ( 18) 00:08:05.794 12401.428 - 12451.840: 2.1205% ( 17) 00:08:05.794 12451.840 - 12502.252: 2.3577% ( 17) 00:08:05.794 12502.252 - 12552.665: 2.7065% ( 25) 00:08:05.794 12552.665 - 12603.077: 3.0134% ( 22) 00:08:05.794 12603.077 - 12653.489: 3.3343% ( 23) 00:08:05.794 12653.489 - 12703.902: 3.6412% ( 22) 00:08:05.794 12703.902 - 12754.314: 3.9900% ( 25) 00:08:05.794 12754.314 - 12804.726: 4.3806% ( 28) 00:08:05.794 12804.726 - 12855.138: 4.7433% ( 26) 00:08:05.794 12855.138 - 12905.551: 5.0921% ( 25) 00:08:05.794 12905.551 - 13006.375: 5.8594% ( 55) 00:08:05.794 13006.375 - 13107.200: 6.5848% ( 52) 00:08:05.794 13107.200 - 13208.025: 7.4498% ( 62) 00:08:05.794 13208.025 - 13308.849: 8.3287% ( 63) 00:08:05.794 13308.849 - 13409.674: 9.1518% ( 59) 00:08:05.794 13409.674 - 13510.498: 10.0446% ( 64) 00:08:05.794 13510.498 - 
13611.323: 11.0491% ( 72) 00:08:05.794 13611.323 - 13712.148: 12.2070% ( 83) 00:08:05.794 13712.148 - 13812.972: 13.4766% ( 91) 00:08:05.794 13812.972 - 13913.797: 14.7879% ( 94) 00:08:05.794 13913.797 - 14014.622: 16.1691% ( 99) 00:08:05.794 14014.622 - 14115.446: 17.3410% ( 84) 00:08:05.794 14115.446 - 14216.271: 18.4431% ( 79) 00:08:05.794 14216.271 - 14317.095: 19.5173% ( 77) 00:08:05.794 14317.095 - 14417.920: 20.4660% ( 68) 00:08:05.794 14417.920 - 14518.745: 21.4425% ( 70) 00:08:05.794 14518.745 - 14619.569: 22.4609% ( 73) 00:08:05.794 14619.569 - 14720.394: 23.4375% ( 70) 00:08:05.794 14720.394 - 14821.218: 24.3304% ( 64) 00:08:05.794 14821.218 - 14922.043: 25.0558% ( 52) 00:08:05.794 14922.043 - 15022.868: 25.7673% ( 51) 00:08:05.794 15022.868 - 15123.692: 26.5904% ( 59) 00:08:05.794 15123.692 - 15224.517: 27.3298% ( 53) 00:08:05.794 15224.517 - 15325.342: 28.0971% ( 55) 00:08:05.794 15325.342 - 15426.166: 28.9621% ( 62) 00:08:05.794 15426.166 - 15526.991: 29.7712% ( 58) 00:08:05.794 15526.991 - 15627.815: 30.6501% ( 63) 00:08:05.794 15627.815 - 15728.640: 31.3895% ( 53) 00:08:05.794 15728.640 - 15829.465: 32.1708% ( 56) 00:08:05.794 15829.465 - 15930.289: 32.9241% ( 54) 00:08:05.794 15930.289 - 16031.114: 33.5379% ( 44) 00:08:05.794 16031.114 - 16131.938: 34.3052% ( 55) 00:08:05.794 16131.938 - 16232.763: 35.0865% ( 56) 00:08:05.794 16232.763 - 16333.588: 35.8677% ( 56) 00:08:05.794 16333.588 - 16434.412: 36.7048% ( 60) 00:08:05.794 16434.412 - 16535.237: 37.5419% ( 60) 00:08:05.794 16535.237 - 16636.062: 38.4487% ( 65) 00:08:05.794 16636.062 - 16736.886: 39.3555% ( 65) 00:08:05.794 16736.886 - 16837.711: 40.4436% ( 78) 00:08:05.794 16837.711 - 16938.535: 41.4062% ( 69) 00:08:05.795 16938.535 - 17039.360: 42.3410% ( 67) 00:08:05.795 17039.360 - 17140.185: 43.2757% ( 67) 00:08:05.795 17140.185 - 17241.009: 44.2662% ( 71) 00:08:05.795 17241.009 - 17341.834: 45.2148% ( 68) 00:08:05.795 17341.834 - 17442.658: 46.3030% ( 78) 00:08:05.795 17442.658 - 17543.483: 47.3354% ( 74) 00:08:05.795 17543.483 - 17644.308: 48.4235% ( 78) 00:08:05.795 17644.308 - 17745.132: 49.4699% ( 75) 00:08:05.795 17745.132 - 17845.957: 50.5022% ( 74) 00:08:05.795 17845.957 - 17946.782: 51.5206% ( 73) 00:08:05.795 17946.782 - 18047.606: 52.7762% ( 90) 00:08:05.795 18047.606 - 18148.431: 53.9342% ( 83) 00:08:05.795 18148.431 - 18249.255: 55.3292% ( 100) 00:08:05.795 18249.255 - 18350.080: 56.5709% ( 89) 00:08:05.795 18350.080 - 18450.905: 57.7846% ( 87) 00:08:05.795 18450.905 - 18551.729: 58.9704% ( 85) 00:08:05.795 18551.729 - 18652.554: 60.0865% ( 80) 00:08:05.795 18652.554 - 18753.378: 61.3839% ( 93) 00:08:05.795 18753.378 - 18854.203: 62.5837% ( 86) 00:08:05.795 18854.203 - 18955.028: 63.8393% ( 90) 00:08:05.795 18955.028 - 19055.852: 65.1646% ( 95) 00:08:05.795 19055.852 - 19156.677: 66.3225% ( 83) 00:08:05.795 19156.677 - 19257.502: 67.5223% ( 86) 00:08:05.795 19257.502 - 19358.326: 68.6105% ( 78) 00:08:05.795 19358.326 - 19459.151: 69.7405% ( 81) 00:08:05.795 19459.151 - 19559.975: 70.8566% ( 80) 00:08:05.795 19559.975 - 19660.800: 71.8890% ( 74) 00:08:05.795 19660.800 - 19761.625: 72.9353% ( 75) 00:08:05.795 19761.625 - 19862.449: 73.9955% ( 76) 00:08:05.795 19862.449 - 19963.274: 75.0977% ( 79) 00:08:05.795 19963.274 - 20064.098: 76.1021% ( 72) 00:08:05.795 20064.098 - 20164.923: 77.1066% ( 72) 00:08:05.795 20164.923 - 20265.748: 77.9018% ( 57) 00:08:05.795 20265.748 - 20366.572: 78.7388% ( 60) 00:08:05.795 20366.572 - 20467.397: 79.6177% ( 63) 00:08:05.795 20467.397 - 20568.222: 80.5246% ( 65) 
00:08:05.795 20568.222 - 20669.046: 81.3895% ( 62) 00:08:05.795 20669.046 - 20769.871: 82.2126% ( 59) 00:08:05.795 20769.871 - 20870.695: 83.0776% ( 62) 00:08:05.795 20870.695 - 20971.520: 83.8728% ( 57) 00:08:05.795 20971.520 - 21072.345: 84.6122% ( 53) 00:08:05.795 21072.345 - 21173.169: 85.3516% ( 53) 00:08:05.795 21173.169 - 21273.994: 86.0910% ( 53) 00:08:05.795 21273.994 - 21374.818: 86.7746% ( 49) 00:08:05.795 21374.818 - 21475.643: 87.3884% ( 44) 00:08:05.795 21475.643 - 21576.468: 88.0162% ( 45) 00:08:05.795 21576.468 - 21677.292: 88.6719% ( 47) 00:08:05.795 21677.292 - 21778.117: 89.2160% ( 39) 00:08:05.795 21778.117 - 21878.942: 89.8158% ( 43) 00:08:05.795 21878.942 - 21979.766: 90.3460% ( 38) 00:08:05.795 21979.766 - 22080.591: 90.9738% ( 45) 00:08:05.795 22080.591 - 22181.415: 91.5039% ( 38) 00:08:05.795 22181.415 - 22282.240: 92.0619% ( 40) 00:08:05.795 22282.240 - 22383.065: 92.6060% ( 39) 00:08:05.795 22383.065 - 22483.889: 93.0943% ( 35) 00:08:05.795 22483.889 - 22584.714: 93.5547% ( 33) 00:08:05.795 22584.714 - 22685.538: 93.9872% ( 31) 00:08:05.795 22685.538 - 22786.363: 94.4196% ( 31) 00:08:05.795 22786.363 - 22887.188: 94.9219% ( 36) 00:08:05.795 22887.188 - 22988.012: 95.3544% ( 31) 00:08:05.795 22988.012 - 23088.837: 95.7310% ( 27) 00:08:05.795 23088.837 - 23189.662: 96.0938% ( 26) 00:08:05.795 23189.662 - 23290.486: 96.4425% ( 25) 00:08:05.795 23290.486 - 23391.311: 96.7355% ( 21) 00:08:05.795 23391.311 - 23492.135: 96.9727% ( 17) 00:08:05.795 23492.135 - 23592.960: 97.1540% ( 13) 00:08:05.795 23592.960 - 23693.785: 97.3075% ( 11) 00:08:05.795 23693.785 - 23794.609: 97.4609% ( 11) 00:08:05.795 23794.609 - 23895.434: 97.6283% ( 12) 00:08:05.795 23895.434 - 23996.258: 97.8097% ( 13) 00:08:05.795 23996.258 - 24097.083: 97.9771% ( 12) 00:08:05.795 24097.083 - 24197.908: 98.1166% ( 10) 00:08:05.795 24197.908 - 24298.732: 98.2003% ( 6) 00:08:05.795 24298.732 - 24399.557: 98.2143% ( 1) 00:08:05.795 28029.243 - 28230.892: 98.2282% ( 1) 00:08:05.795 28230.892 - 28432.542: 98.2980% ( 5) 00:08:05.795 28432.542 - 28634.191: 98.3538% ( 4) 00:08:05.795 28634.191 - 28835.840: 98.4096% ( 4) 00:08:05.795 28835.840 - 29037.489: 98.4794% ( 5) 00:08:05.795 29037.489 - 29239.138: 98.5491% ( 5) 00:08:05.795 29239.138 - 29440.788: 98.6189% ( 5) 00:08:05.795 29440.788 - 29642.437: 98.6886% ( 5) 00:08:05.795 29642.437 - 29844.086: 98.7584% ( 5) 00:08:05.795 29844.086 - 30045.735: 98.8281% ( 5) 00:08:05.795 30045.735 - 30247.385: 98.8979% ( 5) 00:08:05.795 30247.385 - 30449.034: 98.9676% ( 5) 00:08:05.795 30449.034 - 30650.683: 99.0234% ( 4) 00:08:05.795 30650.683 - 30852.332: 99.0932% ( 5) 00:08:05.795 30852.332 - 31053.982: 99.1071% ( 1) 00:08:05.795 37103.458 - 37305.108: 99.1490% ( 3) 00:08:05.795 37305.108 - 37506.757: 99.2188% ( 5) 00:08:05.795 37506.757 - 37708.406: 99.2746% ( 4) 00:08:05.795 37708.406 - 37910.055: 99.3164% ( 3) 00:08:05.795 37910.055 - 38111.705: 99.3862% ( 5) 00:08:05.795 38111.705 - 38313.354: 99.4559% ( 5) 00:08:05.795 38313.354 - 38515.003: 99.5117% ( 4) 00:08:05.795 38515.003 - 38716.652: 99.5815% ( 5) 00:08:05.795 38716.652 - 38918.302: 99.6512% ( 5) 00:08:05.795 38918.302 - 39119.951: 99.7210% ( 5) 00:08:05.795 39119.951 - 39321.600: 99.7907% ( 5) 00:08:05.795 39321.600 - 39523.249: 99.8605% ( 5) 00:08:05.795 39523.249 - 39724.898: 99.9302% ( 5) 00:08:05.795 39724.898 - 39926.548: 99.9860% ( 4) 00:08:05.795 39926.548 - 40128.197: 100.0000% ( 1) 00:08:05.795 00:08:05.795 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:05.795 
============================================================================== 00:08:05.795 Range in us Cumulative IO count 00:08:05.795 11494.006 - 11544.418: 0.0415% ( 3) 00:08:05.795 11544.418 - 11594.831: 0.0691% ( 2) 00:08:05.795 11594.831 - 11645.243: 0.0968% ( 2) 00:08:05.795 11645.243 - 11695.655: 0.1383% ( 3) 00:08:05.795 11695.655 - 11746.068: 0.2074% ( 5) 00:08:05.795 11746.068 - 11796.480: 0.2765% ( 5) 00:08:05.795 11796.480 - 11846.892: 0.4010% ( 9) 00:08:05.795 11846.892 - 11897.305: 0.4840% ( 6) 00:08:05.795 11897.305 - 11947.717: 0.5808% ( 7) 00:08:05.795 11947.717 - 11998.129: 0.6637% ( 6) 00:08:05.795 11998.129 - 12048.542: 0.7605% ( 7) 00:08:05.795 12048.542 - 12098.954: 0.9264% ( 12) 00:08:05.795 12098.954 - 12149.366: 1.1062% ( 13) 00:08:05.795 12149.366 - 12199.778: 1.2721% ( 12) 00:08:05.795 12199.778 - 12250.191: 1.4657% ( 14) 00:08:05.795 12250.191 - 12300.603: 1.6731% ( 15) 00:08:05.795 12300.603 - 12351.015: 1.8529% ( 13) 00:08:05.795 12351.015 - 12401.428: 2.0465% ( 14) 00:08:05.795 12401.428 - 12451.840: 2.2539% ( 15) 00:08:05.795 12451.840 - 12502.252: 2.4613% ( 15) 00:08:05.795 12502.252 - 12552.665: 2.7240% ( 19) 00:08:05.795 12552.665 - 12603.077: 2.9591% ( 17) 00:08:05.795 12603.077 - 12653.489: 3.2356% ( 20) 00:08:05.795 12653.489 - 12703.902: 3.4983% ( 19) 00:08:05.796 12703.902 - 12754.314: 3.7472% ( 18) 00:08:05.796 12754.314 - 12804.726: 4.0238% ( 20) 00:08:05.796 12804.726 - 12855.138: 4.3003% ( 20) 00:08:05.796 12855.138 - 12905.551: 4.6045% ( 22) 00:08:05.796 12905.551 - 13006.375: 5.2683% ( 48) 00:08:05.796 13006.375 - 13107.200: 6.0011% ( 53) 00:08:05.796 13107.200 - 13208.025: 6.7754% ( 56) 00:08:05.796 13208.025 - 13308.849: 7.7157% ( 68) 00:08:05.796 13308.849 - 13409.674: 8.7666% ( 76) 00:08:05.796 13409.674 - 13510.498: 9.7345% ( 70) 00:08:05.796 13510.498 - 13611.323: 10.8269% ( 79) 00:08:05.796 13611.323 - 13712.148: 11.9746% ( 83) 00:08:05.796 13712.148 - 13812.972: 13.1914% ( 88) 00:08:05.796 13812.972 - 13913.797: 14.4773% ( 93) 00:08:05.796 13913.797 - 14014.622: 15.8324% ( 98) 00:08:05.796 14014.622 - 14115.446: 17.2428% ( 102) 00:08:05.796 14115.446 - 14216.271: 18.5149% ( 92) 00:08:05.796 14216.271 - 14317.095: 19.7179% ( 87) 00:08:05.796 14317.095 - 14417.920: 20.9624% ( 90) 00:08:05.796 14417.920 - 14518.745: 22.2483% ( 93) 00:08:05.796 14518.745 - 14619.569: 23.3407% ( 79) 00:08:05.796 14619.569 - 14720.394: 24.4054% ( 77) 00:08:05.796 14720.394 - 14821.218: 25.5116% ( 80) 00:08:05.796 14821.218 - 14922.043: 26.6040% ( 79) 00:08:05.796 14922.043 - 15022.868: 27.5719% ( 70) 00:08:05.796 15022.868 - 15123.692: 28.3877% ( 59) 00:08:05.796 15123.692 - 15224.517: 29.2035% ( 59) 00:08:05.796 15224.517 - 15325.342: 30.0885% ( 64) 00:08:05.796 15325.342 - 15426.166: 31.0149% ( 67) 00:08:05.796 15426.166 - 15526.991: 31.6510% ( 46) 00:08:05.796 15526.991 - 15627.815: 32.3838% ( 53) 00:08:05.796 15627.815 - 15728.640: 33.0337% ( 47) 00:08:05.796 15728.640 - 15829.465: 33.6007% ( 41) 00:08:05.796 15829.465 - 15930.289: 34.2229% ( 45) 00:08:05.796 15930.289 - 16031.114: 34.8866% ( 48) 00:08:05.796 16031.114 - 16131.938: 35.4674% ( 42) 00:08:05.796 16131.938 - 16232.763: 36.0343% ( 41) 00:08:05.796 16232.763 - 16333.588: 36.5874% ( 40) 00:08:05.796 16333.588 - 16434.412: 37.1128% ( 38) 00:08:05.796 16434.412 - 16535.237: 37.7489% ( 46) 00:08:05.796 16535.237 - 16636.062: 38.4126% ( 48) 00:08:05.796 16636.062 - 16736.886: 39.1731% ( 55) 00:08:05.796 16736.886 - 16837.711: 39.9198% ( 54) 00:08:05.796 16837.711 - 16938.535: 40.8324% ( 66) 
00:08:05.796 16938.535 - 17039.360: 41.8556% ( 74) 00:08:05.796 17039.360 - 17140.185: 42.8374% ( 71) 00:08:05.796 17140.185 - 17241.009: 43.8330% ( 72) 00:08:05.796 17241.009 - 17341.834: 44.8977% ( 77) 00:08:05.796 17341.834 - 17442.658: 46.0039% ( 80) 00:08:05.796 17442.658 - 17543.483: 47.1792% ( 85) 00:08:05.796 17543.483 - 17644.308: 48.2992% ( 81) 00:08:05.796 17644.308 - 17745.132: 49.3501% ( 76) 00:08:05.796 17745.132 - 17845.957: 50.4840% ( 82) 00:08:05.796 17845.957 - 17946.782: 51.6178% ( 82) 00:08:05.796 17946.782 - 18047.606: 52.8070% ( 86) 00:08:05.796 18047.606 - 18148.431: 53.9408% ( 82) 00:08:05.796 18148.431 - 18249.255: 55.1300% ( 86) 00:08:05.796 18249.255 - 18350.080: 56.4436% ( 95) 00:08:05.796 18350.080 - 18450.905: 57.6189% ( 85) 00:08:05.796 18450.905 - 18551.729: 58.8496% ( 89) 00:08:05.796 18551.729 - 18652.554: 60.0940% ( 90) 00:08:05.796 18652.554 - 18753.378: 61.2417% ( 83) 00:08:05.796 18753.378 - 18854.203: 62.5000% ( 91) 00:08:05.796 18854.203 - 18955.028: 63.7306% ( 89) 00:08:05.796 18955.028 - 19055.852: 64.8783% ( 83) 00:08:05.796 19055.852 - 19156.677: 65.9983% ( 81) 00:08:05.796 19156.677 - 19257.502: 67.2152% ( 88) 00:08:05.796 19257.502 - 19358.326: 68.4596% ( 90) 00:08:05.796 19358.326 - 19459.151: 69.4967% ( 75) 00:08:05.796 19459.151 - 19559.975: 70.6582% ( 84) 00:08:05.796 19559.975 - 19660.800: 71.6538% ( 72) 00:08:05.796 19660.800 - 19761.625: 72.7185% ( 77) 00:08:05.796 19761.625 - 19862.449: 73.7832% ( 77) 00:08:05.796 19862.449 - 19963.274: 74.8341% ( 76) 00:08:05.796 19963.274 - 20064.098: 75.8435% ( 73) 00:08:05.796 20064.098 - 20164.923: 76.9635% ( 81) 00:08:05.796 20164.923 - 20265.748: 77.9867% ( 74) 00:08:05.796 20265.748 - 20366.572: 78.9408% ( 69) 00:08:05.796 20366.572 - 20467.397: 79.8949% ( 69) 00:08:05.796 20467.397 - 20568.222: 80.8075% ( 66) 00:08:05.796 20568.222 - 20669.046: 81.7478% ( 68) 00:08:05.796 20669.046 - 20769.871: 82.6604% ( 66) 00:08:05.796 20769.871 - 20870.695: 83.5315% ( 63) 00:08:05.796 20870.695 - 20971.520: 84.3612% ( 60) 00:08:05.796 20971.520 - 21072.345: 85.2185% ( 62) 00:08:05.796 21072.345 - 21173.169: 85.9098% ( 50) 00:08:05.796 21173.169 - 21273.994: 86.6704% ( 55) 00:08:05.796 21273.994 - 21374.818: 87.3894% ( 52) 00:08:05.796 21374.818 - 21475.643: 88.1222% ( 53) 00:08:05.796 21475.643 - 21576.468: 88.8827% ( 55) 00:08:05.796 21576.468 - 21677.292: 89.6709% ( 57) 00:08:05.796 21677.292 - 21778.117: 90.4591% ( 57) 00:08:05.796 21778.117 - 21878.942: 91.1781% ( 52) 00:08:05.796 21878.942 - 21979.766: 91.8695% ( 50) 00:08:05.796 21979.766 - 22080.591: 92.5885% ( 52) 00:08:05.796 22080.591 - 22181.415: 93.2522% ( 48) 00:08:05.796 22181.415 - 22282.240: 93.8330% ( 42) 00:08:05.796 22282.240 - 22383.065: 94.4137% ( 42) 00:08:05.796 22383.065 - 22483.889: 94.9530% ( 39) 00:08:05.796 22483.889 - 22584.714: 95.4508% ( 36) 00:08:05.796 22584.714 - 22685.538: 95.9762% ( 38) 00:08:05.796 22685.538 - 22786.363: 96.4740% ( 36) 00:08:05.796 22786.363 - 22887.188: 96.9856% ( 37) 00:08:05.796 22887.188 - 22988.012: 97.3451% ( 26) 00:08:05.796 22988.012 - 23088.837: 97.6355% ( 21) 00:08:05.796 23088.837 - 23189.662: 97.8706% ( 17) 00:08:05.796 23189.662 - 23290.486: 98.0503% ( 13) 00:08:05.796 23290.486 - 23391.311: 98.2024% ( 11) 00:08:05.796 23391.311 - 23492.135: 98.3545% ( 11) 00:08:05.796 23492.135 - 23592.960: 98.4790% ( 9) 00:08:05.796 23592.960 - 23693.785: 98.5619% ( 6) 00:08:05.796 23693.785 - 23794.609: 98.6726% ( 8) 00:08:05.796 23794.609 - 23895.434: 98.7694% ( 7) 00:08:05.796 23895.434 - 23996.258: 
98.8662% ( 7) 00:08:05.796 23996.258 - 24097.083: 98.9215% ( 4) 00:08:05.796 24097.083 - 24197.908: 98.9491% ( 2) 00:08:05.796 24197.908 - 24298.732: 98.9629% ( 1) 00:08:05.796 24298.732 - 24399.557: 98.9906% ( 2) 00:08:05.796 24399.557 - 24500.382: 99.0183% ( 2) 00:08:05.796 24500.382 - 24601.206: 99.0459% ( 2) 00:08:05.796 24601.206 - 24702.031: 99.0597% ( 1) 00:08:05.796 24702.031 - 24802.855: 99.0874% ( 2) 00:08:05.796 24802.855 - 24903.680: 99.1150% ( 2) 00:08:05.796 27222.646 - 27424.295: 99.1427% ( 2) 00:08:05.796 27424.295 - 27625.945: 99.2118% ( 5) 00:08:05.796 27625.945 - 27827.594: 99.2810% ( 5) 00:08:05.796 27827.594 - 28029.243: 99.3501% ( 5) 00:08:05.796 28029.243 - 28230.892: 99.4054% ( 4) 00:08:05.796 28230.892 - 28432.542: 99.4746% ( 5) 00:08:05.796 28432.542 - 28634.191: 99.5437% ( 5) 00:08:05.796 28634.191 - 28835.840: 99.6128% ( 5) 00:08:05.796 28835.840 - 29037.489: 99.6681% ( 4) 00:08:05.796 29037.489 - 29239.138: 99.7373% ( 5) 00:08:05.796 29239.138 - 29440.788: 99.8064% ( 5) 00:08:05.796 29440.788 - 29642.437: 99.8617% ( 4) 00:08:05.796 29642.437 - 29844.086: 99.9309% ( 5) 00:08:05.796 29844.086 - 30045.735: 100.0000% ( 5) 00:08:05.796 00:08:05.796 10:08:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:07.196 Initializing NVMe Controllers 00:08:07.196 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.196 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.196 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:07.196 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.196 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:07.196 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:07.196 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:07.196 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:07.196 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:07.196 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:07.196 Initialization complete. Launching workers. 
00:08:07.196 ======================================================== 00:08:07.196 Latency(us) 00:08:07.196 Device Information : IOPS MiB/s Average min max 00:08:07.196 PCIE (0000:00:10.0) NSID 1 from core 0: 7268.63 85.18 17631.58 11808.01 59548.40 00:08:07.196 PCIE (0000:00:11.0) NSID 1 from core 0: 7268.63 85.18 17566.16 12105.64 56174.34 00:08:07.196 PCIE (0000:00:13.0) NSID 1 from core 0: 7268.63 85.18 17502.22 11977.93 53758.05 00:08:07.196 PCIE (0000:00:12.0) NSID 1 from core 0: 7268.63 85.18 17438.60 11953.10 50970.13 00:08:07.196 PCIE (0000:00:12.0) NSID 2 from core 0: 7268.63 85.18 17374.85 12053.31 48060.85 00:08:07.196 PCIE (0000:00:12.0) NSID 3 from core 0: 7332.39 85.93 17160.93 11825.42 33234.71 00:08:07.196 ======================================================== 00:08:07.196 Total : 43675.56 511.82 17445.31 11808.01 59548.40 00:08:07.196 00:08:07.196 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:07.196 ================================================================================= 00:08:07.196 1.00000% : 12300.603us 00:08:07.196 10.00000% : 13208.025us 00:08:07.196 25.00000% : 14115.446us 00:08:07.196 50.00000% : 15829.465us 00:08:07.196 75.00000% : 20769.871us 00:08:07.196 90.00000% : 23492.135us 00:08:07.196 95.00000% : 24601.206us 00:08:07.196 98.00000% : 26012.751us 00:08:07.196 99.00000% : 45169.428us 00:08:07.196 99.50000% : 56865.083us 00:08:07.196 99.90000% : 59284.874us 00:08:07.196 99.99000% : 59688.172us 00:08:07.196 99.99900% : 59688.172us 00:08:07.196 99.99990% : 59688.172us 00:08:07.196 99.99999% : 59688.172us 00:08:07.196 00:08:07.196 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:07.196 ================================================================================= 00:08:07.196 1.00000% : 12451.840us 00:08:07.196 10.00000% : 13308.849us 00:08:07.196 25.00000% : 14115.446us 00:08:07.196 50.00000% : 15829.465us 00:08:07.196 75.00000% : 20769.871us 00:08:07.196 90.00000% : 23391.311us 00:08:07.196 95.00000% : 24399.557us 00:08:07.196 98.00000% : 25609.452us 00:08:07.196 99.00000% : 42144.689us 00:08:07.196 99.50000% : 54041.994us 00:08:07.196 99.90000% : 56058.486us 00:08:07.196 99.99000% : 56461.785us 00:08:07.196 99.99900% : 56461.785us 00:08:07.196 99.99990% : 56461.785us 00:08:07.196 99.99999% : 56461.785us 00:08:07.196 00:08:07.196 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:07.197 ================================================================================= 00:08:07.197 1.00000% : 12351.015us 00:08:07.197 10.00000% : 13208.025us 00:08:07.197 25.00000% : 14115.446us 00:08:07.197 50.00000% : 15829.465us 00:08:07.197 75.00000% : 20870.695us 00:08:07.197 90.00000% : 23290.486us 00:08:07.197 95.00000% : 24298.732us 00:08:07.197 98.00000% : 25508.628us 00:08:07.197 99.00000% : 39119.951us 00:08:07.197 99.50000% : 51622.203us 00:08:07.197 99.90000% : 53638.695us 00:08:07.197 99.99000% : 54041.994us 00:08:07.197 99.99900% : 54041.994us 00:08:07.197 99.99990% : 54041.994us 00:08:07.197 99.99999% : 54041.994us 00:08:07.197 00:08:07.197 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:07.197 ================================================================================= 00:08:07.197 1.00000% : 12351.015us 00:08:07.197 10.00000% : 13107.200us 00:08:07.197 25.00000% : 14014.622us 00:08:07.197 50.00000% : 15829.465us 00:08:07.197 75.00000% : 20769.871us 00:08:07.197 90.00000% : 23189.662us 00:08:07.197 95.00000% : 24298.732us 00:08:07.197 98.00000% : 25407.803us 
00:08:07.197 99.00000% : 36296.862us 00:08:07.197 99.50000% : 48597.465us 00:08:07.197 99.90000% : 50613.957us 00:08:07.197 99.99000% : 51017.255us 00:08:07.197 99.99900% : 51017.255us 00:08:07.197 99.99990% : 51017.255us 00:08:07.197 99.99999% : 51017.255us 00:08:07.197 00:08:07.197 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:07.197 ================================================================================= 00:08:07.197 1.00000% : 12401.428us 00:08:07.197 10.00000% : 13208.025us 00:08:07.197 25.00000% : 14115.446us 00:08:07.197 50.00000% : 15829.465us 00:08:07.197 75.00000% : 20669.046us 00:08:07.197 90.00000% : 23290.486us 00:08:07.197 95.00000% : 24298.732us 00:08:07.197 98.00000% : 25508.628us 00:08:07.197 99.00000% : 33473.772us 00:08:07.197 99.50000% : 45976.025us 00:08:07.197 99.90000% : 47790.868us 00:08:07.197 99.99000% : 48194.166us 00:08:07.197 99.99900% : 48194.166us 00:08:07.197 99.99990% : 48194.166us 00:08:07.197 99.99999% : 48194.166us 00:08:07.197 00:08:07.197 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:07.197 ================================================================================= 00:08:07.197 1.00000% : 12401.428us 00:08:07.197 10.00000% : 13208.025us 00:08:07.197 25.00000% : 14115.446us 00:08:07.197 50.00000% : 15930.289us 00:08:07.197 75.00000% : 20568.222us 00:08:07.197 90.00000% : 23189.662us 00:08:07.197 95.00000% : 24097.083us 00:08:07.197 98.00000% : 24903.680us 00:08:07.197 99.00000% : 25609.452us 00:08:07.197 99.50000% : 31053.982us 00:08:07.197 99.90000% : 32868.825us 00:08:07.197 99.99000% : 33272.123us 00:08:07.197 99.99900% : 33272.123us 00:08:07.197 99.99990% : 33272.123us 00:08:07.197 99.99999% : 33272.123us 00:08:07.197 00:08:07.197 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:07.197 ============================================================================== 00:08:07.197 Range in us Cumulative IO count 00:08:07.197 11796.480 - 11846.892: 0.0137% ( 1) 00:08:07.197 11897.305 - 11947.717: 0.0411% ( 2) 00:08:07.197 11947.717 - 11998.129: 0.1371% ( 7) 00:08:07.197 11998.129 - 12048.542: 0.3015% ( 12) 00:08:07.197 12048.542 - 12098.954: 0.3838% ( 6) 00:08:07.197 12098.954 - 12149.366: 0.5757% ( 14) 00:08:07.197 12149.366 - 12199.778: 0.7401% ( 12) 00:08:07.197 12199.778 - 12250.191: 0.9183% ( 13) 00:08:07.197 12250.191 - 12300.603: 1.2061% ( 21) 00:08:07.197 12300.603 - 12351.015: 1.5351% ( 24) 00:08:07.197 12351.015 - 12401.428: 1.9874% ( 33) 00:08:07.197 12401.428 - 12451.840: 2.6727% ( 50) 00:08:07.197 12451.840 - 12502.252: 3.1524% ( 35) 00:08:07.197 12502.252 - 12552.665: 3.6732% ( 38) 00:08:07.197 12552.665 - 12603.077: 4.0296% ( 26) 00:08:07.197 12603.077 - 12653.489: 4.3174% ( 21) 00:08:07.197 12653.489 - 12703.902: 4.6464% ( 24) 00:08:07.197 12703.902 - 12754.314: 4.9479% ( 22) 00:08:07.197 12754.314 - 12804.726: 5.4825% ( 39) 00:08:07.197 12804.726 - 12855.138: 6.1266% ( 47) 00:08:07.197 12855.138 - 12905.551: 6.8942% ( 56) 00:08:07.197 12905.551 - 13006.375: 8.3607% ( 107) 00:08:07.197 13006.375 - 13107.200: 9.6217% ( 92) 00:08:07.197 13107.200 - 13208.025: 10.8141% ( 87) 00:08:07.197 13208.025 - 13308.849: 12.0203% ( 88) 00:08:07.197 13308.849 - 13409.674: 13.4183% ( 102) 00:08:07.197 13409.674 - 13510.498: 14.8712% ( 106) 00:08:07.197 13510.498 - 13611.323: 16.3925% ( 111) 00:08:07.197 13611.323 - 13712.148: 18.6678% ( 166) 00:08:07.197 13712.148 - 13812.972: 20.6689% ( 146) 00:08:07.197 13812.972 - 13913.797: 22.8344% ( 158) 00:08:07.197 13913.797 - 
14014.622: 24.7807% ( 142) 00:08:07.197 14014.622 - 14115.446: 26.3021% ( 111) 00:08:07.197 14115.446 - 14216.271: 27.8235% ( 111) 00:08:07.197 14216.271 - 14317.095: 29.2215% ( 102) 00:08:07.197 14317.095 - 14417.920: 30.8799% ( 121) 00:08:07.197 14417.920 - 14518.745: 32.4836% ( 117) 00:08:07.197 14518.745 - 14619.569: 34.0598% ( 115) 00:08:07.197 14619.569 - 14720.394: 35.6497% ( 116) 00:08:07.197 14720.394 - 14821.218: 37.1162% ( 107) 00:08:07.197 14821.218 - 14922.043: 38.4320% ( 96) 00:08:07.197 14922.043 - 15022.868: 39.7889% ( 99) 00:08:07.197 15022.868 - 15123.692: 41.1458% ( 99) 00:08:07.197 15123.692 - 15224.517: 42.5302% ( 101) 00:08:07.197 15224.517 - 15325.342: 43.7911% ( 92) 00:08:07.197 15325.342 - 15426.166: 45.0932% ( 95) 00:08:07.197 15426.166 - 15526.991: 46.4912% ( 102) 00:08:07.197 15526.991 - 15627.815: 47.8618% ( 100) 00:08:07.197 15627.815 - 15728.640: 49.1365% ( 93) 00:08:07.197 15728.640 - 15829.465: 50.2741% ( 83) 00:08:07.197 15829.465 - 15930.289: 51.3706% ( 80) 00:08:07.197 15930.289 - 16031.114: 52.4671% ( 80) 00:08:07.197 16031.114 - 16131.938: 53.4128% ( 69) 00:08:07.197 16131.938 - 16232.763: 54.4408% ( 75) 00:08:07.197 16232.763 - 16333.588: 55.1124% ( 49) 00:08:07.197 16333.588 - 16434.412: 56.3048% ( 87) 00:08:07.197 16434.412 - 16535.237: 57.2643% ( 70) 00:08:07.197 16535.237 - 16636.062: 58.2648% ( 73) 00:08:07.197 16636.062 - 16736.886: 59.0323% ( 56) 00:08:07.197 16736.886 - 16837.711: 59.8958% ( 63) 00:08:07.197 16837.711 - 16938.535: 60.7182% ( 60) 00:08:07.197 16938.535 - 17039.360: 61.7325% ( 74) 00:08:07.197 17039.360 - 17140.185: 62.4452% ( 52) 00:08:07.197 17140.185 - 17241.009: 63.0894% ( 47) 00:08:07.197 17241.009 - 17341.834: 63.7336% ( 47) 00:08:07.197 17341.834 - 17442.658: 64.4874% ( 55) 00:08:07.197 17442.658 - 17543.483: 65.0768% ( 43) 00:08:07.197 17543.483 - 17644.308: 65.6798% ( 44) 00:08:07.197 17644.308 - 17745.132: 66.2007% ( 38) 00:08:07.197 17745.132 - 17845.957: 66.5844% ( 28) 00:08:07.197 17845.957 - 17946.782: 67.0641% ( 35) 00:08:07.197 17946.782 - 18047.606: 67.4205% ( 26) 00:08:07.197 18047.606 - 18148.431: 67.8180% ( 29) 00:08:07.197 18148.431 - 18249.255: 68.1743% ( 26) 00:08:07.197 18249.255 - 18350.080: 68.5444% ( 27) 00:08:07.197 18350.080 - 18450.905: 68.9282% ( 28) 00:08:07.197 18450.905 - 18551.729: 69.2434% ( 23) 00:08:07.197 18551.729 - 18652.554: 69.4901% ( 18) 00:08:07.197 18652.554 - 18753.378: 69.7643% ( 20) 00:08:07.197 18753.378 - 18854.203: 70.0247% ( 19) 00:08:07.197 18854.203 - 18955.028: 70.3399% ( 23) 00:08:07.197 18955.028 - 19055.852: 70.5181% ( 13) 00:08:07.197 19055.852 - 19156.677: 70.7237% ( 15) 00:08:07.197 19156.677 - 19257.502: 70.8607% ( 10) 00:08:07.197 19257.502 - 19358.326: 71.0115% ( 11) 00:08:07.197 19358.326 - 19459.151: 71.3405% ( 24) 00:08:07.197 19459.151 - 19559.975: 71.6283% ( 21) 00:08:07.197 19559.975 - 19660.800: 71.9572% ( 24) 00:08:07.197 19660.800 - 19761.625: 72.1902% ( 17) 00:08:07.197 19761.625 - 19862.449: 72.5603% ( 27) 00:08:07.197 19862.449 - 19963.274: 72.8481% ( 21) 00:08:07.197 19963.274 - 20064.098: 73.1908% ( 25) 00:08:07.197 20064.098 - 20164.923: 73.4512% ( 19) 00:08:07.197 20164.923 - 20265.748: 73.6979% ( 18) 00:08:07.197 20265.748 - 20366.572: 73.9857% ( 21) 00:08:07.197 20366.572 - 20467.397: 74.2188% ( 17) 00:08:07.197 20467.397 - 20568.222: 74.5888% ( 27) 00:08:07.197 20568.222 - 20669.046: 74.8904% ( 22) 00:08:07.197 20669.046 - 20769.871: 75.2741% ( 28) 00:08:07.197 20769.871 - 20870.695: 75.6579% ( 28) 00:08:07.197 20870.695 - 20971.520: 76.0554% ( 
29) 00:08:07.197 20971.520 - 21072.345: 76.5488% ( 36) 00:08:07.197 21072.345 - 21173.169: 77.0422% ( 36) 00:08:07.197 21173.169 - 21273.994: 77.5768% ( 39) 00:08:07.197 21273.994 - 21374.818: 77.9331% ( 26) 00:08:07.197 21374.818 - 21475.643: 78.5910% ( 48) 00:08:07.197 21475.643 - 21576.468: 79.2626% ( 49) 00:08:07.197 21576.468 - 21677.292: 79.8794% ( 45) 00:08:07.197 21677.292 - 21778.117: 80.3317% ( 33) 00:08:07.197 21778.117 - 21878.942: 80.9485% ( 45) 00:08:07.197 21878.942 - 21979.766: 81.5241% ( 42) 00:08:07.197 21979.766 - 22080.591: 82.2368% ( 52) 00:08:07.197 22080.591 - 22181.415: 82.9359% ( 51) 00:08:07.197 22181.415 - 22282.240: 83.6349% ( 51) 00:08:07.197 22282.240 - 22383.065: 84.2105% ( 42) 00:08:07.197 22383.065 - 22483.889: 84.7177% ( 37) 00:08:07.197 22483.889 - 22584.714: 85.2933% ( 42) 00:08:07.197 22584.714 - 22685.538: 86.0060% ( 52) 00:08:07.197 22685.538 - 22786.363: 86.5543% ( 40) 00:08:07.197 22786.363 - 22887.188: 87.0888% ( 39) 00:08:07.197 22887.188 - 22988.012: 87.6096% ( 38) 00:08:07.197 22988.012 - 23088.837: 88.1579% ( 40) 00:08:07.197 23088.837 - 23189.662: 88.7061% ( 40) 00:08:07.197 23189.662 - 23290.486: 89.3229% ( 45) 00:08:07.197 23290.486 - 23391.311: 89.8026% ( 35) 00:08:07.197 23391.311 - 23492.135: 90.2961% ( 36) 00:08:07.197 23492.135 - 23592.960: 90.7758% ( 35) 00:08:07.197 23592.960 - 23693.785: 91.2692% ( 36) 00:08:07.197 23693.785 - 23794.609: 91.7352% ( 34) 00:08:07.197 23794.609 - 23895.434: 92.1738% ( 32) 00:08:07.197 23895.434 - 23996.258: 92.6398% ( 34) 00:08:07.198 23996.258 - 24097.083: 93.0647% ( 31) 00:08:07.198 24097.083 - 24197.908: 93.5581% ( 36) 00:08:07.198 24197.908 - 24298.732: 94.1475% ( 43) 00:08:07.198 24298.732 - 24399.557: 94.4353% ( 21) 00:08:07.198 24399.557 - 24500.382: 94.7231% ( 21) 00:08:07.198 24500.382 - 24601.206: 95.1343% ( 30) 00:08:07.198 24601.206 - 24702.031: 95.4496% ( 23) 00:08:07.198 24702.031 - 24802.855: 95.9019% ( 33) 00:08:07.198 24802.855 - 24903.680: 96.1897% ( 21) 00:08:07.198 24903.680 - 25004.505: 96.4912% ( 22) 00:08:07.198 25004.505 - 25105.329: 96.7654% ( 20) 00:08:07.198 25105.329 - 25206.154: 96.9435% ( 13) 00:08:07.198 25206.154 - 25306.978: 97.2314% ( 21) 00:08:07.198 25306.978 - 25407.803: 97.3958% ( 12) 00:08:07.198 25407.803 - 25508.628: 97.5877% ( 14) 00:08:07.198 25508.628 - 25609.452: 97.7385% ( 11) 00:08:07.198 25609.452 - 25710.277: 97.8481% ( 8) 00:08:07.198 25710.277 - 25811.102: 97.9852% ( 10) 00:08:07.198 25811.102 - 26012.751: 98.1634% ( 13) 00:08:07.198 26012.751 - 26214.400: 98.2456% ( 6) 00:08:07.198 41539.742 - 41741.391: 98.3827% ( 10) 00:08:07.198 41741.391 - 41943.040: 98.4238% ( 3) 00:08:07.198 41943.040 - 42144.689: 98.5060% ( 6) 00:08:07.198 42144.689 - 42346.338: 98.5197% ( 1) 00:08:07.198 42346.338 - 42547.988: 98.5471% ( 2) 00:08:07.198 42547.988 - 42749.637: 98.5609% ( 1) 00:08:07.198 42749.637 - 42951.286: 98.6294% ( 5) 00:08:07.198 43152.935 - 43354.585: 98.6979% ( 5) 00:08:07.198 43354.585 - 43556.234: 98.7116% ( 1) 00:08:07.198 43556.234 - 43757.883: 98.7390% ( 2) 00:08:07.198 43757.883 - 43959.532: 98.8076% ( 5) 00:08:07.198 43959.532 - 44161.182: 98.8350% ( 2) 00:08:07.198 44161.182 - 44362.831: 98.8624% ( 2) 00:08:07.198 44362.831 - 44564.480: 98.9309% ( 5) 00:08:07.198 44564.480 - 44766.129: 98.9446% ( 1) 00:08:07.198 44766.129 - 44967.778: 98.9857% ( 3) 00:08:07.198 44967.778 - 45169.428: 99.0406% ( 4) 00:08:07.198 45169.428 - 45371.077: 99.0680% ( 2) 00:08:07.198 45371.077 - 45572.726: 99.1091% ( 3) 00:08:07.198 45572.726 - 45774.375: 99.1228% ( 1) 
00:08:07.198 54848.591 - 55251.889: 99.2050% ( 6) 00:08:07.198 55251.889 - 55655.188: 99.2599% ( 4) 00:08:07.198 55655.188 - 56058.486: 99.3558% ( 7) 00:08:07.198 56058.486 - 56461.785: 99.4380% ( 6) 00:08:07.198 56461.785 - 56865.083: 99.5066% ( 5) 00:08:07.198 56865.083 - 57268.382: 99.5751% ( 5) 00:08:07.198 57268.382 - 57671.680: 99.6436% ( 5) 00:08:07.198 57671.680 - 58074.978: 99.7122% ( 5) 00:08:07.198 58074.978 - 58478.277: 99.7944% ( 6) 00:08:07.198 58478.277 - 58881.575: 99.8766% ( 6) 00:08:07.198 58881.575 - 59284.874: 99.9452% ( 5) 00:08:07.198 59284.874 - 59688.172: 100.0000% ( 4) 00:08:07.198 00:08:07.198 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:07.198 ============================================================================== 00:08:07.198 Range in us Cumulative IO count 00:08:07.198 12098.954 - 12149.366: 0.0137% ( 1) 00:08:07.198 12149.366 - 12199.778: 0.0411% ( 2) 00:08:07.198 12199.778 - 12250.191: 0.1645% ( 9) 00:08:07.198 12250.191 - 12300.603: 0.3427% ( 13) 00:08:07.198 12300.603 - 12351.015: 0.5620% ( 16) 00:08:07.198 12351.015 - 12401.428: 0.7401% ( 13) 00:08:07.198 12401.428 - 12451.840: 1.0143% ( 20) 00:08:07.198 12451.840 - 12502.252: 1.3980% ( 28) 00:08:07.198 12502.252 - 12552.665: 1.8366% ( 32) 00:08:07.198 12552.665 - 12603.077: 2.2752% ( 32) 00:08:07.198 12603.077 - 12653.489: 2.6316% ( 26) 00:08:07.198 12653.489 - 12703.902: 3.0565% ( 31) 00:08:07.198 12703.902 - 12754.314: 3.5773% ( 38) 00:08:07.198 12754.314 - 12804.726: 4.1118% ( 39) 00:08:07.198 12804.726 - 12855.138: 4.6464% ( 39) 00:08:07.198 12855.138 - 12905.551: 5.3317% ( 50) 00:08:07.198 12905.551 - 13006.375: 6.6201% ( 94) 00:08:07.198 13006.375 - 13107.200: 7.9770% ( 99) 00:08:07.198 13107.200 - 13208.025: 9.5121% ( 112) 00:08:07.198 13208.025 - 13308.849: 11.2802% ( 129) 00:08:07.198 13308.849 - 13409.674: 12.9934% ( 125) 00:08:07.198 13409.674 - 13510.498: 14.7478% ( 128) 00:08:07.198 13510.498 - 13611.323: 16.9956% ( 164) 00:08:07.198 13611.323 - 13712.148: 19.3805% ( 174) 00:08:07.198 13712.148 - 13812.972: 21.3542% ( 144) 00:08:07.198 13812.972 - 13913.797: 23.0400% ( 123) 00:08:07.198 13913.797 - 14014.622: 24.7122% ( 122) 00:08:07.198 14014.622 - 14115.446: 26.4529% ( 127) 00:08:07.198 14115.446 - 14216.271: 28.2895% ( 134) 00:08:07.198 14216.271 - 14317.095: 30.1672% ( 137) 00:08:07.198 14317.095 - 14417.920: 31.7845% ( 118) 00:08:07.198 14417.920 - 14518.745: 33.3607% ( 115) 00:08:07.198 14518.745 - 14619.569: 35.3893% ( 148) 00:08:07.198 14619.569 - 14720.394: 37.2396% ( 135) 00:08:07.198 14720.394 - 14821.218: 38.9391% ( 124) 00:08:07.198 14821.218 - 14922.043: 40.3098% ( 100) 00:08:07.198 14922.043 - 15022.868: 41.6804% ( 100) 00:08:07.198 15022.868 - 15123.692: 42.8865% ( 88) 00:08:07.198 15123.692 - 15224.517: 44.0515% ( 85) 00:08:07.198 15224.517 - 15325.342: 45.4633% ( 103) 00:08:07.198 15325.342 - 15426.166: 46.5598% ( 80) 00:08:07.198 15426.166 - 15526.991: 47.5329% ( 71) 00:08:07.198 15526.991 - 15627.815: 48.5883% ( 77) 00:08:07.198 15627.815 - 15728.640: 49.6025% ( 74) 00:08:07.198 15728.640 - 15829.465: 50.4660% ( 63) 00:08:07.198 15829.465 - 15930.289: 51.6447% ( 86) 00:08:07.198 15930.289 - 16031.114: 52.8646% ( 89) 00:08:07.198 16031.114 - 16131.938: 54.0022% ( 83) 00:08:07.198 16131.938 - 16232.763: 55.1398% ( 83) 00:08:07.198 16232.763 - 16333.588: 56.0855% ( 69) 00:08:07.198 16333.588 - 16434.412: 56.9764% ( 65) 00:08:07.198 16434.412 - 16535.237: 58.1414% ( 85) 00:08:07.198 16535.237 - 16636.062: 58.9638% ( 60) 00:08:07.198 16636.062 - 
16736.886: 59.7725% ( 59) 00:08:07.198 16736.886 - 16837.711: 60.6771% ( 66) 00:08:07.198 16837.711 - 16938.535: 61.4583% ( 57) 00:08:07.198 16938.535 - 17039.360: 62.2807% ( 60) 00:08:07.198 17039.360 - 17140.185: 62.9386% ( 48) 00:08:07.198 17140.185 - 17241.009: 63.3635% ( 31) 00:08:07.198 17241.009 - 17341.834: 63.8295% ( 34) 00:08:07.198 17341.834 - 17442.658: 64.2681% ( 32) 00:08:07.198 17442.658 - 17543.483: 64.7615% ( 36) 00:08:07.198 17543.483 - 17644.308: 65.2138% ( 33) 00:08:07.198 17644.308 - 17745.132: 65.6935% ( 35) 00:08:07.198 17745.132 - 17845.957: 66.0362% ( 25) 00:08:07.198 17845.957 - 17946.782: 66.3925% ( 26) 00:08:07.198 17946.782 - 18047.606: 66.7078% ( 23) 00:08:07.198 18047.606 - 18148.431: 67.0504% ( 25) 00:08:07.198 18148.431 - 18249.255: 67.3657% ( 23) 00:08:07.198 18249.255 - 18350.080: 67.7083% ( 25) 00:08:07.198 18350.080 - 18450.905: 68.1880% ( 35) 00:08:07.198 18450.905 - 18551.729: 68.4896% ( 22) 00:08:07.198 18551.729 - 18652.554: 68.7363% ( 18) 00:08:07.198 18652.554 - 18753.378: 69.0241% ( 21) 00:08:07.198 18753.378 - 18854.203: 69.2845% ( 19) 00:08:07.198 18854.203 - 18955.028: 69.5312% ( 18) 00:08:07.198 18955.028 - 19055.852: 69.8191% ( 21) 00:08:07.198 19055.852 - 19156.677: 70.1343% ( 23) 00:08:07.198 19156.677 - 19257.502: 70.4221% ( 21) 00:08:07.198 19257.502 - 19358.326: 70.6140% ( 14) 00:08:07.198 19358.326 - 19459.151: 70.8059% ( 14) 00:08:07.198 19459.151 - 19559.975: 70.9704% ( 12) 00:08:07.198 19559.975 - 19660.800: 71.1075% ( 10) 00:08:07.198 19660.800 - 19761.625: 71.2582% ( 11) 00:08:07.198 19761.625 - 19862.449: 71.6009% ( 25) 00:08:07.198 19862.449 - 19963.274: 72.1354% ( 39) 00:08:07.198 19963.274 - 20064.098: 72.4644% ( 24) 00:08:07.198 20064.098 - 20164.923: 72.8344% ( 27) 00:08:07.198 20164.923 - 20265.748: 73.2456% ( 30) 00:08:07.198 20265.748 - 20366.572: 73.7939% ( 40) 00:08:07.198 20366.572 - 20467.397: 74.1913% ( 29) 00:08:07.198 20467.397 - 20568.222: 74.4929% ( 22) 00:08:07.198 20568.222 - 20669.046: 74.7259% ( 17) 00:08:07.198 20669.046 - 20769.871: 75.0274% ( 22) 00:08:07.198 20769.871 - 20870.695: 75.2741% ( 18) 00:08:07.198 20870.695 - 20971.520: 75.6716% ( 29) 00:08:07.198 20971.520 - 21072.345: 76.0005% ( 24) 00:08:07.198 21072.345 - 21173.169: 76.5625% ( 41) 00:08:07.198 21173.169 - 21273.994: 77.0833% ( 38) 00:08:07.198 21273.994 - 21374.818: 77.6727% ( 43) 00:08:07.198 21374.818 - 21475.643: 78.3991% ( 53) 00:08:07.198 21475.643 - 21576.468: 79.0433% ( 47) 00:08:07.198 21576.468 - 21677.292: 79.6327% ( 43) 00:08:07.198 21677.292 - 21778.117: 80.1809% ( 40) 00:08:07.198 21778.117 - 21878.942: 80.7566% ( 42) 00:08:07.198 21878.942 - 21979.766: 81.4145% ( 48) 00:08:07.198 21979.766 - 22080.591: 82.0998% ( 50) 00:08:07.198 22080.591 - 22181.415: 82.7303% ( 46) 00:08:07.198 22181.415 - 22282.240: 83.2511% ( 38) 00:08:07.198 22282.240 - 22383.065: 83.9227% ( 49) 00:08:07.198 22383.065 - 22483.889: 84.4435% ( 38) 00:08:07.198 22483.889 - 22584.714: 85.1700% ( 53) 00:08:07.198 22584.714 - 22685.538: 85.8690% ( 51) 00:08:07.198 22685.538 - 22786.363: 86.8421% ( 71) 00:08:07.198 22786.363 - 22887.188: 87.4863% ( 47) 00:08:07.198 22887.188 - 22988.012: 88.0894% ( 44) 00:08:07.198 22988.012 - 23088.837: 88.6513% ( 41) 00:08:07.198 23088.837 - 23189.662: 89.2955% ( 47) 00:08:07.198 23189.662 - 23290.486: 89.8712% ( 42) 00:08:07.198 23290.486 - 23391.311: 90.5702% ( 51) 00:08:07.198 23391.311 - 23492.135: 91.3240% ( 55) 00:08:07.198 23492.135 - 23592.960: 91.9271% ( 44) 00:08:07.198 23592.960 - 23693.785: 92.3794% ( 33) 
00:08:07.198 23693.785 - 23794.609: 92.9139% ( 39) 00:08:07.198 23794.609 - 23895.434: 93.4073% ( 36) 00:08:07.198 23895.434 - 23996.258: 93.8871% ( 35) 00:08:07.198 23996.258 - 24097.083: 94.2708% ( 28) 00:08:07.198 24097.083 - 24197.908: 94.6546% ( 28) 00:08:07.198 24197.908 - 24298.732: 94.9973% ( 25) 00:08:07.198 24298.732 - 24399.557: 95.3673% ( 27) 00:08:07.198 24399.557 - 24500.382: 95.7785% ( 30) 00:08:07.198 24500.382 - 24601.206: 96.1075% ( 24) 00:08:07.198 24601.206 - 24702.031: 96.4090% ( 22) 00:08:07.198 24702.031 - 24802.855: 96.6694% ( 19) 00:08:07.198 24802.855 - 24903.680: 96.9298% ( 19) 00:08:07.199 24903.680 - 25004.505: 97.1491% ( 16) 00:08:07.199 25004.505 - 25105.329: 97.3273% ( 13) 00:08:07.199 25105.329 - 25206.154: 97.4644% ( 10) 00:08:07.199 25206.154 - 25306.978: 97.6014% ( 10) 00:08:07.199 25306.978 - 25407.803: 97.7659% ( 12) 00:08:07.199 25407.803 - 25508.628: 97.9030% ( 10) 00:08:07.199 25508.628 - 25609.452: 98.0674% ( 12) 00:08:07.199 25609.452 - 25710.277: 98.1497% ( 6) 00:08:07.199 25710.277 - 25811.102: 98.2182% ( 5) 00:08:07.199 25811.102 - 26012.751: 98.2456% ( 2) 00:08:07.199 38313.354 - 38515.003: 98.2867% ( 3) 00:08:07.199 38515.003 - 38716.652: 98.3141% ( 2) 00:08:07.199 38716.652 - 38918.302: 98.3690% ( 4) 00:08:07.199 38918.302 - 39119.951: 98.4101% ( 3) 00:08:07.199 39119.951 - 39321.600: 98.4512% ( 3) 00:08:07.199 39321.600 - 39523.249: 98.5060% ( 4) 00:08:07.199 39523.249 - 39724.898: 98.5471% ( 3) 00:08:07.199 39724.898 - 39926.548: 98.5883% ( 3) 00:08:07.199 39926.548 - 40128.197: 98.6157% ( 2) 00:08:07.199 40128.197 - 40329.846: 98.6568% ( 3) 00:08:07.199 40329.846 - 40531.495: 98.6979% ( 3) 00:08:07.199 40531.495 - 40733.145: 98.7527% ( 4) 00:08:07.199 40733.145 - 40934.794: 98.7939% ( 3) 00:08:07.199 40934.794 - 41136.443: 98.8213% ( 2) 00:08:07.199 41136.443 - 41338.092: 98.8761% ( 4) 00:08:07.199 41338.092 - 41539.742: 98.9172% ( 3) 00:08:07.199 41539.742 - 41741.391: 98.9583% ( 3) 00:08:07.199 41741.391 - 41943.040: 98.9995% ( 3) 00:08:07.199 41943.040 - 42144.689: 99.0543% ( 4) 00:08:07.199 42144.689 - 42346.338: 99.0954% ( 3) 00:08:07.199 42346.338 - 42547.988: 99.1228% ( 2) 00:08:07.199 52025.502 - 52428.800: 99.2188% ( 7) 00:08:07.199 52428.800 - 52832.098: 99.3010% ( 6) 00:08:07.199 52832.098 - 53235.397: 99.3832% ( 6) 00:08:07.199 53235.397 - 53638.695: 99.4655% ( 6) 00:08:07.199 53638.695 - 54041.994: 99.5614% ( 7) 00:08:07.199 54041.994 - 54445.292: 99.6436% ( 6) 00:08:07.199 54445.292 - 54848.591: 99.7259% ( 6) 00:08:07.199 54848.591 - 55251.889: 99.7944% ( 5) 00:08:07.199 55251.889 - 55655.188: 99.8766% ( 6) 00:08:07.199 55655.188 - 56058.486: 99.9726% ( 7) 00:08:07.199 56058.486 - 56461.785: 100.0000% ( 2) 00:08:07.199 00:08:07.199 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:07.199 ============================================================================== 00:08:07.199 Range in us Cumulative IO count 00:08:07.199 11947.717 - 11998.129: 0.0411% ( 3) 00:08:07.199 11998.129 - 12048.542: 0.1645% ( 9) 00:08:07.199 12048.542 - 12098.954: 0.2741% ( 8) 00:08:07.199 12098.954 - 12149.366: 0.3975% ( 9) 00:08:07.199 12149.366 - 12199.778: 0.5482% ( 11) 00:08:07.199 12199.778 - 12250.191: 0.7127% ( 12) 00:08:07.199 12250.191 - 12300.603: 0.9457% ( 17) 00:08:07.199 12300.603 - 12351.015: 1.2061% ( 19) 00:08:07.199 12351.015 - 12401.428: 1.4803% ( 20) 00:08:07.199 12401.428 - 12451.840: 1.8366% ( 26) 00:08:07.199 12451.840 - 12502.252: 2.3575% ( 38) 00:08:07.199 12502.252 - 12552.665: 2.9194% ( 41) 00:08:07.199 
12552.665 - 12603.077: 3.4677% ( 40) 00:08:07.199 12603.077 - 12653.489: 4.0707% ( 44) 00:08:07.199 12653.489 - 12703.902: 4.5093% ( 32) 00:08:07.199 12703.902 - 12754.314: 4.9890% ( 35) 00:08:07.199 12754.314 - 12804.726: 5.5099% ( 38) 00:08:07.199 12804.726 - 12855.138: 6.0033% ( 36) 00:08:07.199 12855.138 - 12905.551: 6.6201% ( 45) 00:08:07.199 12905.551 - 13006.375: 7.8536% ( 90) 00:08:07.199 13006.375 - 13107.200: 9.0049% ( 84) 00:08:07.199 13107.200 - 13208.025: 10.5263% ( 111) 00:08:07.199 13208.025 - 13308.849: 11.9106% ( 101) 00:08:07.199 13308.849 - 13409.674: 13.4046% ( 109) 00:08:07.199 13409.674 - 13510.498: 14.8849% ( 108) 00:08:07.199 13510.498 - 13611.323: 16.6118% ( 126) 00:08:07.199 13611.323 - 13712.148: 18.5444% ( 141) 00:08:07.199 13712.148 - 13812.972: 20.4496% ( 139) 00:08:07.199 13812.972 - 13913.797: 22.3958% ( 142) 00:08:07.199 13913.797 - 14014.622: 23.8350% ( 105) 00:08:07.199 14014.622 - 14115.446: 25.3975% ( 114) 00:08:07.199 14115.446 - 14216.271: 26.9737% ( 115) 00:08:07.199 14216.271 - 14317.095: 28.5499% ( 115) 00:08:07.199 14317.095 - 14417.920: 30.1535% ( 117) 00:08:07.199 14417.920 - 14518.745: 31.7982% ( 120) 00:08:07.199 14518.745 - 14619.569: 33.5938% ( 131) 00:08:07.199 14619.569 - 14720.394: 34.9644% ( 100) 00:08:07.199 14720.394 - 14821.218: 36.5954% ( 119) 00:08:07.199 14821.218 - 14922.043: 38.0620% ( 107) 00:08:07.199 14922.043 - 15022.868: 39.5148% ( 106) 00:08:07.199 15022.868 - 15123.692: 40.7484% ( 90) 00:08:07.199 15123.692 - 15224.517: 41.9545% ( 88) 00:08:07.199 15224.517 - 15325.342: 43.2292% ( 93) 00:08:07.199 15325.342 - 15426.166: 44.6957% ( 107) 00:08:07.199 15426.166 - 15526.991: 46.2445% ( 113) 00:08:07.199 15526.991 - 15627.815: 47.6288% ( 101) 00:08:07.199 15627.815 - 15728.640: 49.0680% ( 105) 00:08:07.199 15728.640 - 15829.465: 50.4386% ( 100) 00:08:07.199 15829.465 - 15930.289: 51.5488% ( 81) 00:08:07.199 15930.289 - 16031.114: 52.8235% ( 93) 00:08:07.199 16031.114 - 16131.938: 54.0159% ( 87) 00:08:07.199 16131.938 - 16232.763: 55.4962% ( 108) 00:08:07.199 16232.763 - 16333.588: 56.9216% ( 104) 00:08:07.199 16333.588 - 16434.412: 58.0318% ( 81) 00:08:07.199 16434.412 - 16535.237: 59.1557% ( 82) 00:08:07.199 16535.237 - 16636.062: 60.0192% ( 63) 00:08:07.199 16636.062 - 16736.886: 60.7319% ( 52) 00:08:07.199 16736.886 - 16837.711: 61.5406% ( 59) 00:08:07.199 16837.711 - 16938.535: 62.2944% ( 55) 00:08:07.199 16938.535 - 17039.360: 62.9660% ( 49) 00:08:07.199 17039.360 - 17140.185: 63.5965% ( 46) 00:08:07.199 17140.185 - 17241.009: 64.2270% ( 46) 00:08:07.199 17241.009 - 17341.834: 64.7752% ( 40) 00:08:07.199 17341.834 - 17442.658: 65.2961% ( 38) 00:08:07.199 17442.658 - 17543.483: 65.7758% ( 35) 00:08:07.199 17543.483 - 17644.308: 66.2007% ( 31) 00:08:07.199 17644.308 - 17745.132: 66.6530% ( 33) 00:08:07.199 17745.132 - 17845.957: 67.0916% ( 32) 00:08:07.199 17845.957 - 17946.782: 67.5164% ( 31) 00:08:07.199 17946.782 - 18047.606: 67.8865% ( 27) 00:08:07.199 18047.606 - 18148.431: 68.2155% ( 24) 00:08:07.199 18148.431 - 18249.255: 68.6678% ( 33) 00:08:07.199 18249.255 - 18350.080: 68.9967% ( 24) 00:08:07.199 18350.080 - 18450.905: 69.3805% ( 28) 00:08:07.199 18450.905 - 18551.729: 69.7231% ( 25) 00:08:07.199 18551.729 - 18652.554: 70.0384% ( 23) 00:08:07.199 18652.554 - 18753.378: 70.2988% ( 19) 00:08:07.199 18753.378 - 18854.203: 70.5592% ( 19) 00:08:07.199 18854.203 - 18955.028: 70.7785% ( 16) 00:08:07.199 18955.028 - 19055.852: 71.0800% ( 22) 00:08:07.199 19055.852 - 19156.677: 71.3542% ( 20) 00:08:07.199 19156.677 - 
19257.502: 71.6831% ( 24) 00:08:07.199 19257.502 - 19358.326: 71.9984% ( 23) 00:08:07.199 19358.326 - 19459.151: 72.3273% ( 24) 00:08:07.199 19459.151 - 19559.975: 72.5603% ( 17) 00:08:07.199 19559.975 - 19660.800: 72.9030% ( 25) 00:08:07.199 19660.800 - 19761.625: 73.0537% ( 11) 00:08:07.199 19761.625 - 19862.449: 73.2319% ( 13) 00:08:07.199 19862.449 - 19963.274: 73.4512% ( 16) 00:08:07.199 19963.274 - 20064.098: 73.6020% ( 11) 00:08:07.199 20064.098 - 20164.923: 73.9309% ( 24) 00:08:07.199 20164.923 - 20265.748: 74.1502% ( 16) 00:08:07.199 20265.748 - 20366.572: 74.3147% ( 12) 00:08:07.199 20366.572 - 20467.397: 74.4380% ( 9) 00:08:07.199 20467.397 - 20568.222: 74.6025% ( 12) 00:08:07.199 20568.222 - 20669.046: 74.7807% ( 13) 00:08:07.199 20669.046 - 20769.871: 74.9726% ( 14) 00:08:07.199 20769.871 - 20870.695: 75.2056% ( 17) 00:08:07.199 20870.695 - 20971.520: 75.4386% ( 17) 00:08:07.199 20971.520 - 21072.345: 75.7264% ( 21) 00:08:07.199 21072.345 - 21173.169: 76.0828% ( 26) 00:08:07.199 21173.169 - 21273.994: 76.3843% ( 22) 00:08:07.199 21273.994 - 21374.818: 76.6584% ( 20) 00:08:07.199 21374.818 - 21475.643: 77.1656% ( 37) 00:08:07.199 21475.643 - 21576.468: 77.8098% ( 47) 00:08:07.199 21576.468 - 21677.292: 78.4814% ( 49) 00:08:07.199 21677.292 - 21778.117: 79.0844% ( 44) 00:08:07.199 21778.117 - 21878.942: 79.7149% ( 46) 00:08:07.199 21878.942 - 21979.766: 80.3317% ( 45) 00:08:07.199 21979.766 - 22080.591: 81.0170% ( 50) 00:08:07.199 22080.591 - 22181.415: 81.6064% ( 43) 00:08:07.199 22181.415 - 22282.240: 82.2231% ( 45) 00:08:07.199 22282.240 - 22383.065: 83.1552% ( 68) 00:08:07.199 22383.065 - 22483.889: 83.8953% ( 54) 00:08:07.199 22483.889 - 22584.714: 84.8136% ( 67) 00:08:07.199 22584.714 - 22685.538: 85.6360% ( 60) 00:08:07.199 22685.538 - 22786.363: 86.5269% ( 65) 00:08:07.199 22786.363 - 22887.188: 87.5137% ( 72) 00:08:07.199 22887.188 - 22988.012: 88.2675% ( 55) 00:08:07.199 22988.012 - 23088.837: 88.8980% ( 46) 00:08:07.199 23088.837 - 23189.662: 89.6245% ( 53) 00:08:07.199 23189.662 - 23290.486: 90.2686% ( 47) 00:08:07.199 23290.486 - 23391.311: 90.8032% ( 39) 00:08:07.199 23391.311 - 23492.135: 91.3925% ( 43) 00:08:07.199 23492.135 - 23592.960: 92.1464% ( 55) 00:08:07.199 23592.960 - 23693.785: 92.6809% ( 39) 00:08:07.199 23693.785 - 23794.609: 93.2155% ( 39) 00:08:07.199 23794.609 - 23895.434: 93.6404% ( 31) 00:08:07.199 23895.434 - 23996.258: 94.0378% ( 29) 00:08:07.199 23996.258 - 24097.083: 94.4627% ( 31) 00:08:07.199 24097.083 - 24197.908: 94.9013% ( 32) 00:08:07.199 24197.908 - 24298.732: 95.2714% ( 27) 00:08:07.199 24298.732 - 24399.557: 95.6689% ( 29) 00:08:07.199 24399.557 - 24500.382: 96.0389% ( 27) 00:08:07.199 24500.382 - 24601.206: 96.3405% ( 22) 00:08:07.199 24601.206 - 24702.031: 96.6146% ( 20) 00:08:07.199 24702.031 - 24802.855: 96.8750% ( 19) 00:08:07.199 24802.855 - 24903.680: 97.0943% ( 16) 00:08:07.199 24903.680 - 25004.505: 97.2999% ( 15) 00:08:07.199 25004.505 - 25105.329: 97.5055% ( 15) 00:08:07.199 25105.329 - 25206.154: 97.6425% ( 10) 00:08:07.199 25206.154 - 25306.978: 97.7796% ( 10) 00:08:07.199 25306.978 - 25407.803: 97.9030% ( 9) 00:08:07.199 25407.803 - 25508.628: 98.0263% ( 9) 00:08:07.199 25508.628 - 25609.452: 98.0948% ( 5) 00:08:07.199 25609.452 - 25710.277: 98.1771% ( 6) 00:08:07.200 25710.277 - 25811.102: 98.2182% ( 3) 00:08:07.200 25811.102 - 26012.751: 98.2456% ( 2) 00:08:07.200 35288.615 - 35490.265: 98.2593% ( 1) 00:08:07.200 35490.265 - 35691.914: 98.3004% ( 3) 00:08:07.200 35691.914 - 35893.563: 98.3279% ( 2) 00:08:07.200 
35893.563 - 36095.212: 98.3690% ( 3) 00:08:07.200 36095.212 - 36296.862: 98.4101% ( 3) 00:08:07.200 36296.862 - 36498.511: 98.4512% ( 3) 00:08:07.200 36498.511 - 36700.160: 98.4923% ( 3) 00:08:07.200 36700.160 - 36901.809: 98.5334% ( 3) 00:08:07.200 36901.809 - 37103.458: 98.5746% ( 3) 00:08:07.200 37103.458 - 37305.108: 98.6157% ( 3) 00:08:07.200 37305.108 - 37506.757: 98.6568% ( 3) 00:08:07.200 37506.757 - 37708.406: 98.6979% ( 3) 00:08:07.200 37708.406 - 37910.055: 98.7527% ( 4) 00:08:07.200 37910.055 - 38111.705: 98.7939% ( 3) 00:08:07.200 38111.705 - 38313.354: 98.8350% ( 3) 00:08:07.200 38313.354 - 38515.003: 98.8761% ( 3) 00:08:07.200 38515.003 - 38716.652: 98.9172% ( 3) 00:08:07.200 38716.652 - 38918.302: 98.9583% ( 3) 00:08:07.200 38918.302 - 39119.951: 99.0132% ( 4) 00:08:07.200 39119.951 - 39321.600: 99.0543% ( 3) 00:08:07.200 39321.600 - 39523.249: 99.0954% ( 3) 00:08:07.200 39523.249 - 39724.898: 99.1228% ( 2) 00:08:07.200 49605.711 - 49807.360: 99.1639% ( 3) 00:08:07.200 49807.360 - 50009.009: 99.2050% ( 3) 00:08:07.200 50009.009 - 50210.658: 99.2462% ( 3) 00:08:07.200 50210.658 - 50412.308: 99.2873% ( 3) 00:08:07.200 50412.308 - 50613.957: 99.3284% ( 3) 00:08:07.200 50613.957 - 50815.606: 99.3695% ( 3) 00:08:07.200 50815.606 - 51017.255: 99.4106% ( 3) 00:08:07.200 51017.255 - 51218.905: 99.4518% ( 3) 00:08:07.200 51218.905 - 51420.554: 99.4792% ( 2) 00:08:07.200 51420.554 - 51622.203: 99.5340% ( 4) 00:08:07.200 51622.203 - 52025.502: 99.6162% ( 6) 00:08:07.200 52025.502 - 52428.800: 99.6985% ( 6) 00:08:07.200 52428.800 - 52832.098: 99.7944% ( 7) 00:08:07.200 52832.098 - 53235.397: 99.8766% ( 6) 00:08:07.200 53235.397 - 53638.695: 99.9726% ( 7) 00:08:07.200 53638.695 - 54041.994: 100.0000% ( 2) 00:08:07.200 00:08:07.200 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:07.200 ============================================================================== 00:08:07.200 Range in us Cumulative IO count 00:08:07.200 11947.717 - 11998.129: 0.0411% ( 3) 00:08:07.200 11998.129 - 12048.542: 0.1096% ( 5) 00:08:07.200 12048.542 - 12098.954: 0.2878% ( 13) 00:08:07.200 12098.954 - 12149.366: 0.4386% ( 11) 00:08:07.200 12149.366 - 12199.778: 0.6031% ( 12) 00:08:07.200 12199.778 - 12250.191: 0.7950% ( 14) 00:08:07.200 12250.191 - 12300.603: 0.9731% ( 13) 00:08:07.200 12300.603 - 12351.015: 1.1924% ( 16) 00:08:07.200 12351.015 - 12401.428: 1.4117% ( 16) 00:08:07.200 12401.428 - 12451.840: 1.6310% ( 16) 00:08:07.200 12451.840 - 12502.252: 1.9052% ( 20) 00:08:07.200 12502.252 - 12552.665: 2.2478% ( 25) 00:08:07.200 12552.665 - 12603.077: 2.7138% ( 34) 00:08:07.200 12603.077 - 12653.489: 3.2484% ( 39) 00:08:07.200 12653.489 - 12703.902: 3.7281% ( 35) 00:08:07.200 12703.902 - 12754.314: 4.3723% ( 47) 00:08:07.200 12754.314 - 12804.726: 5.1124% ( 54) 00:08:07.200 12804.726 - 12855.138: 5.8114% ( 51) 00:08:07.200 12855.138 - 12905.551: 6.6749% ( 63) 00:08:07.200 12905.551 - 13006.375: 8.2922% ( 118) 00:08:07.200 13006.375 - 13107.200: 10.0466% ( 128) 00:08:07.200 13107.200 - 13208.025: 11.7599% ( 125) 00:08:07.200 13208.025 - 13308.849: 13.5828% ( 133) 00:08:07.200 13308.849 - 13409.674: 15.2138% ( 119) 00:08:07.200 13409.674 - 13510.498: 16.5844% ( 100) 00:08:07.200 13510.498 - 13611.323: 17.9413% ( 99) 00:08:07.200 13611.323 - 13712.148: 19.4490% ( 110) 00:08:07.200 13712.148 - 13812.972: 21.4364% ( 145) 00:08:07.200 13812.972 - 13913.797: 23.1908% ( 128) 00:08:07.200 13913.797 - 14014.622: 25.0274% ( 134) 00:08:07.200 14014.622 - 14115.446: 26.4529% ( 104) 00:08:07.200 
14115.446 - 14216.271: 27.6042% ( 84) 00:08:07.200 14216.271 - 14317.095: 28.6184% ( 74) 00:08:07.200 14317.095 - 14417.920: 29.9205% ( 95) 00:08:07.200 14417.920 - 14518.745: 31.1678% ( 91) 00:08:07.200 14518.745 - 14619.569: 32.6754% ( 110) 00:08:07.200 14619.569 - 14720.394: 34.2105% ( 112) 00:08:07.200 14720.394 - 14821.218: 36.0746% ( 136) 00:08:07.200 14821.218 - 14922.043: 37.8427% ( 129) 00:08:07.200 14922.043 - 15022.868: 39.5285% ( 123) 00:08:07.200 15022.868 - 15123.692: 41.0773% ( 113) 00:08:07.200 15123.692 - 15224.517: 42.4342% ( 99) 00:08:07.200 15224.517 - 15325.342: 44.1338% ( 124) 00:08:07.200 15325.342 - 15426.166: 45.6277% ( 109) 00:08:07.200 15426.166 - 15526.991: 46.9161% ( 94) 00:08:07.200 15526.991 - 15627.815: 48.1497% ( 90) 00:08:07.200 15627.815 - 15728.640: 49.3969% ( 91) 00:08:07.200 15728.640 - 15829.465: 50.6442% ( 91) 00:08:07.200 15829.465 - 15930.289: 51.8366% ( 87) 00:08:07.200 15930.289 - 16031.114: 53.0291% ( 87) 00:08:07.200 16031.114 - 16131.938: 54.1393% ( 81) 00:08:07.200 16131.938 - 16232.763: 55.3454% ( 88) 00:08:07.200 16232.763 - 16333.588: 56.4693% ( 82) 00:08:07.200 16333.588 - 16434.412: 57.6343% ( 85) 00:08:07.200 16434.412 - 16535.237: 58.8953% ( 92) 00:08:07.200 16535.237 - 16636.062: 59.7999% ( 66) 00:08:07.200 16636.062 - 16736.886: 60.6908% ( 65) 00:08:07.200 16736.886 - 16837.711: 61.5269% ( 61) 00:08:07.200 16837.711 - 16938.535: 62.2122% ( 50) 00:08:07.200 16938.535 - 17039.360: 62.8015% ( 43) 00:08:07.200 17039.360 - 17140.185: 63.4183% ( 45) 00:08:07.200 17140.185 - 17241.009: 63.9117% ( 36) 00:08:07.200 17241.009 - 17341.834: 64.5422% ( 46) 00:08:07.200 17341.834 - 17442.658: 65.0082% ( 34) 00:08:07.200 17442.658 - 17543.483: 65.3920% ( 28) 00:08:07.200 17543.483 - 17644.308: 65.8169% ( 31) 00:08:07.200 17644.308 - 17745.132: 66.3103% ( 36) 00:08:07.200 17745.132 - 17845.957: 66.8586% ( 40) 00:08:07.200 17845.957 - 17946.782: 67.2971% ( 32) 00:08:07.200 17946.782 - 18047.606: 67.6672% ( 27) 00:08:07.200 18047.606 - 18148.431: 68.1195% ( 33) 00:08:07.200 18148.431 - 18249.255: 68.6266% ( 37) 00:08:07.200 18249.255 - 18350.080: 69.0104% ( 28) 00:08:07.200 18350.080 - 18450.905: 69.3668% ( 26) 00:08:07.200 18450.905 - 18551.729: 69.6957% ( 24) 00:08:07.200 18551.729 - 18652.554: 69.9561% ( 19) 00:08:07.200 18652.554 - 18753.378: 70.1617% ( 15) 00:08:07.200 18753.378 - 18854.203: 70.3125% ( 11) 00:08:07.200 18854.203 - 18955.028: 70.5318% ( 16) 00:08:07.200 18955.028 - 19055.852: 70.7785% ( 18) 00:08:07.200 19055.852 - 19156.677: 71.0938% ( 23) 00:08:07.200 19156.677 - 19257.502: 71.3679% ( 20) 00:08:07.200 19257.502 - 19358.326: 71.6420% ( 20) 00:08:07.200 19358.326 - 19459.151: 71.9161% ( 20) 00:08:07.200 19459.151 - 19559.975: 72.1354% ( 16) 00:08:07.200 19559.975 - 19660.800: 72.3684% ( 17) 00:08:07.200 19660.800 - 19761.625: 72.5877% ( 16) 00:08:07.200 19761.625 - 19862.449: 72.8207% ( 17) 00:08:07.200 19862.449 - 19963.274: 73.0674% ( 18) 00:08:07.200 19963.274 - 20064.098: 73.2867% ( 16) 00:08:07.200 20064.098 - 20164.923: 73.5060% ( 16) 00:08:07.200 20164.923 - 20265.748: 73.6568% ( 11) 00:08:07.200 20265.748 - 20366.572: 73.9446% ( 21) 00:08:07.200 20366.572 - 20467.397: 74.2736% ( 24) 00:08:07.200 20467.397 - 20568.222: 74.5888% ( 23) 00:08:07.200 20568.222 - 20669.046: 74.8904% ( 22) 00:08:07.200 20669.046 - 20769.871: 75.3701% ( 35) 00:08:07.200 20769.871 - 20870.695: 75.6716% ( 22) 00:08:07.200 20870.695 - 20971.520: 75.9046% ( 17) 00:08:07.200 20971.520 - 21072.345: 76.1650% ( 19) 00:08:07.200 21072.345 - 21173.169: 
76.4940% ( 24) 00:08:07.200 21173.169 - 21273.994: 76.8777% ( 28) 00:08:07.200 21273.994 - 21374.818: 77.3163% ( 32) 00:08:07.200 21374.818 - 21475.643: 77.8783% ( 41) 00:08:07.200 21475.643 - 21576.468: 78.3306% ( 33) 00:08:07.200 21576.468 - 21677.292: 78.7966% ( 34) 00:08:07.200 21677.292 - 21778.117: 79.2626% ( 34) 00:08:07.200 21778.117 - 21878.942: 79.7149% ( 33) 00:08:07.200 21878.942 - 21979.766: 80.2495% ( 39) 00:08:07.200 21979.766 - 22080.591: 81.1678% ( 67) 00:08:07.200 22080.591 - 22181.415: 81.7160% ( 40) 00:08:07.200 22181.415 - 22282.240: 82.3876% ( 49) 00:08:07.200 22282.240 - 22383.065: 83.3059% ( 67) 00:08:07.200 22383.065 - 22483.889: 84.1283% ( 60) 00:08:07.200 22483.889 - 22584.714: 84.9918% ( 63) 00:08:07.200 22584.714 - 22685.538: 85.8004% ( 59) 00:08:07.200 22685.538 - 22786.363: 86.5954% ( 58) 00:08:07.200 22786.363 - 22887.188: 87.6234% ( 75) 00:08:07.200 22887.188 - 22988.012: 88.4731% ( 62) 00:08:07.200 22988.012 - 23088.837: 89.1996% ( 53) 00:08:07.200 23088.837 - 23189.662: 90.0219% ( 60) 00:08:07.200 23189.662 - 23290.486: 90.8169% ( 58) 00:08:07.200 23290.486 - 23391.311: 91.3377% ( 38) 00:08:07.200 23391.311 - 23492.135: 91.7900% ( 33) 00:08:07.200 23492.135 - 23592.960: 92.2697% ( 35) 00:08:07.200 23592.960 - 23693.785: 92.7769% ( 37) 00:08:07.200 23693.785 - 23794.609: 93.2429% ( 34) 00:08:07.200 23794.609 - 23895.434: 93.6952% ( 33) 00:08:07.200 23895.434 - 23996.258: 94.0927% ( 29) 00:08:07.200 23996.258 - 24097.083: 94.5038% ( 30) 00:08:07.200 24097.083 - 24197.908: 94.9150% ( 30) 00:08:07.200 24197.908 - 24298.732: 95.3399% ( 31) 00:08:07.200 24298.732 - 24399.557: 95.7648% ( 31) 00:08:07.200 24399.557 - 24500.382: 96.0938% ( 24) 00:08:07.200 24500.382 - 24601.206: 96.3953% ( 22) 00:08:07.200 24601.206 - 24702.031: 96.6557% ( 19) 00:08:07.200 24702.031 - 24802.855: 96.9572% ( 22) 00:08:07.200 24802.855 - 24903.680: 97.1491% ( 14) 00:08:07.200 24903.680 - 25004.505: 97.3684% ( 16) 00:08:07.200 25004.505 - 25105.329: 97.5603% ( 14) 00:08:07.200 25105.329 - 25206.154: 97.7385% ( 13) 00:08:07.200 25206.154 - 25306.978: 97.9030% ( 12) 00:08:07.200 25306.978 - 25407.803: 98.0537% ( 11) 00:08:07.200 25407.803 - 25508.628: 98.1360% ( 6) 00:08:07.200 25508.628 - 25609.452: 98.1908% ( 4) 00:08:07.200 25609.452 - 25710.277: 98.2182% ( 2) 00:08:07.200 25710.277 - 25811.102: 98.2456% ( 2) 00:08:07.200 32465.526 - 32667.175: 98.2593% ( 1) 00:08:07.201 32667.175 - 32868.825: 98.3004% ( 3) 00:08:07.201 32868.825 - 33070.474: 98.3416% ( 3) 00:08:07.201 33070.474 - 33272.123: 98.3827% ( 3) 00:08:07.201 33272.123 - 33473.772: 98.4238% ( 3) 00:08:07.201 33473.772 - 33675.422: 98.4512% ( 2) 00:08:07.201 33675.422 - 33877.071: 98.5060% ( 4) 00:08:07.201 33877.071 - 34078.720: 98.5471% ( 3) 00:08:07.201 34078.720 - 34280.369: 98.5883% ( 3) 00:08:07.201 34280.369 - 34482.018: 98.6294% ( 3) 00:08:07.201 34482.018 - 34683.668: 98.6842% ( 4) 00:08:07.201 34683.668 - 34885.317: 98.7253% ( 3) 00:08:07.201 34885.317 - 35086.966: 98.7664% ( 3) 00:08:07.201 35086.966 - 35288.615: 98.8076% ( 3) 00:08:07.201 35288.615 - 35490.265: 98.8487% ( 3) 00:08:07.201 35490.265 - 35691.914: 98.9035% ( 4) 00:08:07.201 35691.914 - 35893.563: 98.9446% ( 3) 00:08:07.201 35893.563 - 36095.212: 98.9720% ( 2) 00:08:07.201 36095.212 - 36296.862: 99.0269% ( 4) 00:08:07.201 36296.862 - 36498.511: 99.0680% ( 3) 00:08:07.201 36498.511 - 36700.160: 99.1091% ( 3) 00:08:07.201 36700.160 - 36901.809: 99.1228% ( 1) 00:08:07.201 46580.972 - 46782.622: 99.1365% ( 1) 00:08:07.201 46782.622 - 46984.271: 99.1776% ( 
3) 00:08:07.201 46984.271 - 47185.920: 99.2188% ( 3) 00:08:07.201 47185.920 - 47387.569: 99.2599% ( 3) 00:08:07.201 47387.569 - 47589.218: 99.3147% ( 4) 00:08:07.201 47589.218 - 47790.868: 99.3421% ( 2) 00:08:07.201 47790.868 - 47992.517: 99.3832% ( 3) 00:08:07.201 47992.517 - 48194.166: 99.4243% ( 3) 00:08:07.201 48194.166 - 48395.815: 99.4518% ( 2) 00:08:07.201 48395.815 - 48597.465: 99.5066% ( 4) 00:08:07.201 48597.465 - 48799.114: 99.5340% ( 2) 00:08:07.201 48799.114 - 49000.763: 99.5751% ( 3) 00:08:07.201 49000.763 - 49202.412: 99.6162% ( 3) 00:08:07.201 49202.412 - 49404.062: 99.6573% ( 3) 00:08:07.201 49404.062 - 49605.711: 99.6985% ( 3) 00:08:07.201 49605.711 - 49807.360: 99.7396% ( 3) 00:08:07.201 49807.360 - 50009.009: 99.7807% ( 3) 00:08:07.201 50009.009 - 50210.658: 99.8218% ( 3) 00:08:07.201 50210.658 - 50412.308: 99.8629% ( 3) 00:08:07.201 50412.308 - 50613.957: 99.9178% ( 4) 00:08:07.201 50613.957 - 50815.606: 99.9589% ( 3) 00:08:07.201 50815.606 - 51017.255: 100.0000% ( 3) 00:08:07.201 00:08:07.201 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:07.201 ============================================================================== 00:08:07.201 Range in us Cumulative IO count 00:08:07.201 12048.542 - 12098.954: 0.0137% ( 1) 00:08:07.201 12098.954 - 12149.366: 0.0548% ( 3) 00:08:07.201 12149.366 - 12199.778: 0.2056% ( 11) 00:08:07.201 12199.778 - 12250.191: 0.3289% ( 9) 00:08:07.201 12250.191 - 12300.603: 0.5071% ( 13) 00:08:07.201 12300.603 - 12351.015: 0.7538% ( 18) 00:08:07.201 12351.015 - 12401.428: 1.0143% ( 19) 00:08:07.201 12401.428 - 12451.840: 1.2610% ( 18) 00:08:07.201 12451.840 - 12502.252: 1.7133% ( 33) 00:08:07.201 12502.252 - 12552.665: 2.2341% ( 38) 00:08:07.201 12552.665 - 12603.077: 2.6864% ( 33) 00:08:07.201 12603.077 - 12653.489: 3.1524% ( 34) 00:08:07.201 12653.489 - 12703.902: 3.7418% ( 43) 00:08:07.201 12703.902 - 12754.314: 4.2626% ( 38) 00:08:07.201 12754.314 - 12804.726: 4.8794% ( 45) 00:08:07.201 12804.726 - 12855.138: 5.4825% ( 44) 00:08:07.201 12855.138 - 12905.551: 6.1815% ( 51) 00:08:07.201 12905.551 - 13006.375: 7.4698% ( 94) 00:08:07.201 13006.375 - 13107.200: 8.9775% ( 110) 00:08:07.201 13107.200 - 13208.025: 10.8416% ( 136) 00:08:07.201 13208.025 - 13308.849: 12.6096% ( 129) 00:08:07.201 13308.849 - 13409.674: 14.4052% ( 131) 00:08:07.201 13409.674 - 13510.498: 16.1595% ( 128) 00:08:07.201 13510.498 - 13611.323: 17.7769% ( 118) 00:08:07.201 13611.323 - 13712.148: 19.4901% ( 125) 00:08:07.201 13712.148 - 13812.972: 21.0800% ( 116) 00:08:07.201 13812.972 - 13913.797: 22.7248% ( 120) 00:08:07.201 13913.797 - 14014.622: 23.9995% ( 93) 00:08:07.201 14014.622 - 14115.446: 25.3427% ( 98) 00:08:07.201 14115.446 - 14216.271: 26.7955% ( 106) 00:08:07.201 14216.271 - 14317.095: 28.4814% ( 123) 00:08:07.201 14317.095 - 14417.920: 30.2083% ( 126) 00:08:07.201 14417.920 - 14518.745: 31.7708% ( 114) 00:08:07.201 14518.745 - 14619.569: 33.4019% ( 119) 00:08:07.201 14619.569 - 14720.394: 35.1014% ( 124) 00:08:07.201 14720.394 - 14821.218: 36.4857% ( 101) 00:08:07.201 14821.218 - 14922.043: 37.9797% ( 109) 00:08:07.201 14922.043 - 15022.868: 39.5559% ( 115) 00:08:07.201 15022.868 - 15123.692: 41.0636% ( 110) 00:08:07.201 15123.692 - 15224.517: 42.7220% ( 121) 00:08:07.201 15224.517 - 15325.342: 44.0378% ( 96) 00:08:07.201 15325.342 - 15426.166: 45.3125% ( 93) 00:08:07.201 15426.166 - 15526.991: 46.5598% ( 91) 00:08:07.201 15526.991 - 15627.815: 47.7248% ( 85) 00:08:07.201 15627.815 - 15728.640: 49.1639% ( 105) 00:08:07.201 15728.640 - 
15829.465: 50.6031% ( 105) 00:08:07.201 15829.465 - 15930.289: 51.9463% ( 98) 00:08:07.201 15930.289 - 16031.114: 53.2758% ( 97) 00:08:07.201 16031.114 - 16131.938: 54.3448% ( 78) 00:08:07.201 16131.938 - 16232.763: 55.6058% ( 92) 00:08:07.201 16232.763 - 16333.588: 56.7297% ( 82) 00:08:07.201 16333.588 - 16434.412: 57.9359% ( 88) 00:08:07.201 16434.412 - 16535.237: 59.0872% ( 84) 00:08:07.201 16535.237 - 16636.062: 60.0740% ( 72) 00:08:07.201 16636.062 - 16736.886: 60.9101% ( 61) 00:08:07.201 16736.886 - 16837.711: 61.5954% ( 50) 00:08:07.201 16837.711 - 16938.535: 62.2944% ( 51) 00:08:07.201 16938.535 - 17039.360: 63.1853% ( 65) 00:08:07.201 17039.360 - 17140.185: 64.1036% ( 67) 00:08:07.201 17140.185 - 17241.009: 64.7478% ( 47) 00:08:07.201 17241.009 - 17341.834: 65.4057% ( 48) 00:08:07.201 17341.834 - 17442.658: 65.9951% ( 43) 00:08:07.201 17442.658 - 17543.483: 66.5159% ( 38) 00:08:07.201 17543.483 - 17644.308: 67.0504% ( 39) 00:08:07.201 17644.308 - 17745.132: 67.4890% ( 32) 00:08:07.201 17745.132 - 17845.957: 67.8591% ( 27) 00:08:07.201 17845.957 - 17946.782: 68.2018% ( 25) 00:08:07.201 17946.782 - 18047.606: 68.5170% ( 23) 00:08:07.201 18047.606 - 18148.431: 68.8734% ( 26) 00:08:07.201 18148.431 - 18249.255: 69.1749% ( 22) 00:08:07.201 18249.255 - 18350.080: 69.4764% ( 22) 00:08:07.201 18350.080 - 18450.905: 69.7231% ( 18) 00:08:07.201 18450.905 - 18551.729: 69.8739% ( 11) 00:08:07.201 18551.729 - 18652.554: 70.0247% ( 11) 00:08:07.201 18652.554 - 18753.378: 70.2029% ( 13) 00:08:07.201 18753.378 - 18854.203: 70.4359% ( 17) 00:08:07.201 18854.203 - 18955.028: 70.6277% ( 14) 00:08:07.201 18955.028 - 19055.852: 70.9841% ( 26) 00:08:07.201 19055.852 - 19156.677: 71.2171% ( 17) 00:08:07.201 19156.677 - 19257.502: 71.4912% ( 20) 00:08:07.201 19257.502 - 19358.326: 71.8065% ( 23) 00:08:07.201 19358.326 - 19459.151: 72.0395% ( 17) 00:08:07.201 19459.151 - 19559.975: 72.4232% ( 28) 00:08:07.201 19559.975 - 19660.800: 72.6151% ( 14) 00:08:07.201 19660.800 - 19761.625: 72.8481% ( 17) 00:08:07.201 19761.625 - 19862.449: 73.1497% ( 22) 00:08:07.201 19862.449 - 19963.274: 73.4375% ( 21) 00:08:07.201 19963.274 - 20064.098: 73.7116% ( 20) 00:08:07.201 20064.098 - 20164.923: 74.0269% ( 23) 00:08:07.201 20164.923 - 20265.748: 74.2188% ( 14) 00:08:07.201 20265.748 - 20366.572: 74.4792% ( 19) 00:08:07.201 20366.572 - 20467.397: 74.6711% ( 14) 00:08:07.201 20467.397 - 20568.222: 74.9041% ( 17) 00:08:07.201 20568.222 - 20669.046: 75.1371% ( 17) 00:08:07.201 20669.046 - 20769.871: 75.3975% ( 19) 00:08:07.201 20769.871 - 20870.695: 75.9046% ( 37) 00:08:07.201 20870.695 - 20971.520: 76.1787% ( 20) 00:08:07.201 20971.520 - 21072.345: 76.6173% ( 32) 00:08:07.201 21072.345 - 21173.169: 77.0148% ( 29) 00:08:07.201 21173.169 - 21273.994: 77.5356% ( 38) 00:08:07.201 21273.994 - 21374.818: 78.0291% ( 36) 00:08:07.201 21374.818 - 21475.643: 78.3580% ( 24) 00:08:07.201 21475.643 - 21576.468: 78.7418% ( 28) 00:08:07.201 21576.468 - 21677.292: 79.0159% ( 20) 00:08:07.201 21677.292 - 21778.117: 79.5779% ( 41) 00:08:07.201 21778.117 - 21878.942: 80.2220% ( 47) 00:08:07.201 21878.942 - 21979.766: 80.9073% ( 50) 00:08:07.201 21979.766 - 22080.591: 81.4556% ( 40) 00:08:07.201 22080.591 - 22181.415: 82.0724% ( 45) 00:08:07.201 22181.415 - 22282.240: 82.7303% ( 48) 00:08:07.201 22282.240 - 22383.065: 83.3470% ( 45) 00:08:07.201 22383.065 - 22483.889: 83.9775% ( 46) 00:08:07.201 22483.889 - 22584.714: 84.9918% ( 74) 00:08:07.201 22584.714 - 22685.538: 85.7867% ( 58) 00:08:07.202 22685.538 - 22786.363: 86.4995% ( 52) 
00:08:07.202 22786.363 - 22887.188: 87.5959% ( 80) 00:08:07.202 22887.188 - 22988.012: 88.3498% ( 55) 00:08:07.202 22988.012 - 23088.837: 89.0077% ( 48) 00:08:07.202 23088.837 - 23189.662: 89.6793% ( 49) 00:08:07.202 23189.662 - 23290.486: 90.4468% ( 56) 00:08:07.202 23290.486 - 23391.311: 91.0225% ( 42) 00:08:07.202 23391.311 - 23492.135: 91.6530% ( 46) 00:08:07.202 23492.135 - 23592.960: 92.1464% ( 36) 00:08:07.202 23592.960 - 23693.785: 92.5713% ( 31) 00:08:07.202 23693.785 - 23794.609: 93.0373% ( 34) 00:08:07.202 23794.609 - 23895.434: 93.5170% ( 35) 00:08:07.202 23895.434 - 23996.258: 94.0378% ( 38) 00:08:07.202 23996.258 - 24097.083: 94.4216% ( 28) 00:08:07.202 24097.083 - 24197.908: 94.8191% ( 29) 00:08:07.202 24197.908 - 24298.732: 95.2303% ( 30) 00:08:07.202 24298.732 - 24399.557: 95.6003% ( 27) 00:08:07.202 24399.557 - 24500.382: 95.9430% ( 25) 00:08:07.202 24500.382 - 24601.206: 96.2445% ( 22) 00:08:07.202 24601.206 - 24702.031: 96.5872% ( 25) 00:08:07.202 24702.031 - 24802.855: 96.8613% ( 20) 00:08:07.202 24802.855 - 24903.680: 97.1491% ( 21) 00:08:07.202 24903.680 - 25004.505: 97.3410% ( 14) 00:08:07.202 25004.505 - 25105.329: 97.4918% ( 11) 00:08:07.202 25105.329 - 25206.154: 97.6562% ( 12) 00:08:07.202 25206.154 - 25306.978: 97.8070% ( 11) 00:08:07.202 25306.978 - 25407.803: 97.9852% ( 13) 00:08:07.202 25407.803 - 25508.628: 98.0948% ( 8) 00:08:07.202 25508.628 - 25609.452: 98.1497% ( 4) 00:08:07.202 25609.452 - 25710.277: 98.2045% ( 4) 00:08:07.202 25710.277 - 25811.102: 98.2319% ( 2) 00:08:07.202 25811.102 - 26012.751: 98.2456% ( 1) 00:08:07.202 29642.437 - 29844.086: 98.2730% ( 2) 00:08:07.202 29844.086 - 30045.735: 98.3141% ( 3) 00:08:07.202 30045.735 - 30247.385: 98.3690% ( 4) 00:08:07.202 30247.385 - 30449.034: 98.3964% ( 2) 00:08:07.202 30449.034 - 30650.683: 98.4375% ( 3) 00:08:07.202 30650.683 - 30852.332: 98.4786% ( 3) 00:08:07.202 30852.332 - 31053.982: 98.5060% ( 2) 00:08:07.202 31053.982 - 31255.631: 98.5609% ( 4) 00:08:07.202 31255.631 - 31457.280: 98.6020% ( 3) 00:08:07.202 31457.280 - 31658.929: 98.6431% ( 3) 00:08:07.202 31658.929 - 31860.578: 98.6842% ( 3) 00:08:07.202 31860.578 - 32062.228: 98.7390% ( 4) 00:08:07.202 32062.228 - 32263.877: 98.7802% ( 3) 00:08:07.202 32263.877 - 32465.526: 98.8213% ( 3) 00:08:07.202 32465.526 - 32667.175: 98.8624% ( 3) 00:08:07.202 32667.175 - 32868.825: 98.9035% ( 3) 00:08:07.202 32868.825 - 33070.474: 98.9446% ( 3) 00:08:07.202 33070.474 - 33272.123: 98.9995% ( 4) 00:08:07.202 33272.123 - 33473.772: 99.0406% ( 3) 00:08:07.202 33473.772 - 33675.422: 99.0817% ( 3) 00:08:07.202 33675.422 - 33877.071: 99.1228% ( 3) 00:08:07.202 43757.883 - 43959.532: 99.1365% ( 1) 00:08:07.202 43959.532 - 44161.182: 99.1776% ( 3) 00:08:07.202 44161.182 - 44362.831: 99.2188% ( 3) 00:08:07.202 44362.831 - 44564.480: 99.2599% ( 3) 00:08:07.202 44564.480 - 44766.129: 99.3010% ( 3) 00:08:07.202 44766.129 - 44967.778: 99.3421% ( 3) 00:08:07.202 44967.778 - 45169.428: 99.3832% ( 3) 00:08:07.202 45169.428 - 45371.077: 99.4243% ( 3) 00:08:07.202 45371.077 - 45572.726: 99.4655% ( 3) 00:08:07.202 45572.726 - 45774.375: 99.4929% ( 2) 00:08:07.202 45774.375 - 45976.025: 99.5477% ( 4) 00:08:07.202 45976.025 - 46177.674: 99.5888% ( 3) 00:08:07.202 46177.674 - 46379.323: 99.6299% ( 3) 00:08:07.202 46379.323 - 46580.972: 99.6711% ( 3) 00:08:07.202 46580.972 - 46782.622: 99.7122% ( 3) 00:08:07.202 46782.622 - 46984.271: 99.7670% ( 4) 00:08:07.202 46984.271 - 47185.920: 99.8081% ( 3) 00:08:07.202 47185.920 - 47387.569: 99.8492% ( 3) 00:08:07.202 47387.569 - 
47589.218: 99.8904% ( 3) 00:08:07.202 47589.218 - 47790.868: 99.9315% ( 3) 00:08:07.202 47790.868 - 47992.517: 99.9863% ( 4) 00:08:07.202 47992.517 - 48194.166: 100.0000% ( 1) 00:08:07.202 00:08:07.202 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:07.202 ============================================================================== 00:08:07.202 Range in us Cumulative IO count 00:08:07.202 11796.480 - 11846.892: 0.0136% ( 1) 00:08:07.202 11897.305 - 11947.717: 0.0272% ( 1) 00:08:07.202 11947.717 - 11998.129: 0.1087% ( 6) 00:08:07.202 11998.129 - 12048.542: 0.1766% ( 5) 00:08:07.202 12048.542 - 12098.954: 0.2310% ( 4) 00:08:07.202 12098.954 - 12149.366: 0.2853% ( 4) 00:08:07.202 12149.366 - 12199.778: 0.3397% ( 4) 00:08:07.202 12199.778 - 12250.191: 0.4484% ( 8) 00:08:07.202 12250.191 - 12300.603: 0.5435% ( 7) 00:08:07.202 12300.603 - 12351.015: 0.8016% ( 19) 00:08:07.202 12351.015 - 12401.428: 1.1141% ( 23) 00:08:07.202 12401.428 - 12451.840: 1.4674% ( 26) 00:08:07.202 12451.840 - 12502.252: 1.9293% ( 34) 00:08:07.202 12502.252 - 12552.665: 2.3641% ( 32) 00:08:07.202 12552.665 - 12603.077: 2.9348% ( 42) 00:08:07.202 12603.077 - 12653.489: 3.3696% ( 32) 00:08:07.202 12653.489 - 12703.902: 3.8179% ( 33) 00:08:07.202 12703.902 - 12754.314: 4.3886% ( 42) 00:08:07.202 12754.314 - 12804.726: 5.0679% ( 50) 00:08:07.202 12804.726 - 12855.138: 5.7473% ( 50) 00:08:07.202 12855.138 - 12905.551: 6.3723% ( 46) 00:08:07.202 12905.551 - 13006.375: 7.6902% ( 97) 00:08:07.202 13006.375 - 13107.200: 9.0897% ( 103) 00:08:07.202 13107.200 - 13208.025: 10.6929% ( 118) 00:08:07.202 13208.025 - 13308.849: 12.2826% ( 117) 00:08:07.202 13308.849 - 13409.674: 13.8315% ( 114) 00:08:07.202 13409.674 - 13510.498: 15.5163% ( 124) 00:08:07.202 13510.498 - 13611.323: 17.5000% ( 146) 00:08:07.202 13611.323 - 13712.148: 19.3342% ( 135) 00:08:07.202 13712.148 - 13812.972: 20.9783% ( 121) 00:08:07.202 13812.972 - 13913.797: 22.3641% ( 102) 00:08:07.202 13913.797 - 14014.622: 23.9266% ( 115) 00:08:07.202 14014.622 - 14115.446: 25.6522% ( 127) 00:08:07.202 14115.446 - 14216.271: 27.3913% ( 128) 00:08:07.202 14216.271 - 14317.095: 29.3207% ( 142) 00:08:07.202 14317.095 - 14417.920: 30.9239% ( 118) 00:08:07.202 14417.920 - 14518.745: 32.6223% ( 125) 00:08:07.202 14518.745 - 14619.569: 34.2120% ( 117) 00:08:07.202 14619.569 - 14720.394: 35.7609% ( 114) 00:08:07.202 14720.394 - 14821.218: 37.6087% ( 136) 00:08:07.202 14821.218 - 14922.043: 39.2663% ( 122) 00:08:07.202 14922.043 - 15022.868: 40.5299% ( 93) 00:08:07.202 15022.868 - 15123.692: 41.7799% ( 92) 00:08:07.202 15123.692 - 15224.517: 43.0027% ( 90) 00:08:07.202 15224.517 - 15325.342: 44.4022% ( 103) 00:08:07.202 15325.342 - 15426.166: 45.7065% ( 96) 00:08:07.202 15426.166 - 15526.991: 46.6168% ( 67) 00:08:07.202 15526.991 - 15627.815: 47.5543% ( 69) 00:08:07.202 15627.815 - 15728.640: 48.4918% ( 69) 00:08:07.202 15728.640 - 15829.465: 49.6196% ( 83) 00:08:07.202 15829.465 - 15930.289: 50.7337% ( 82) 00:08:07.202 15930.289 - 16031.114: 51.7663% ( 76) 00:08:07.202 16031.114 - 16131.938: 52.8940% ( 83) 00:08:07.202 16131.938 - 16232.763: 54.4293% ( 113) 00:08:07.202 16232.763 - 16333.588: 55.7201% ( 95) 00:08:07.202 16333.588 - 16434.412: 56.9429% ( 90) 00:08:07.202 16434.412 - 16535.237: 58.0842% ( 84) 00:08:07.202 16535.237 - 16636.062: 59.2935% ( 89) 00:08:07.202 16636.062 - 16736.886: 60.3261% ( 76) 00:08:07.202 16736.886 - 16837.711: 61.3995% ( 79) 00:08:07.202 16837.711 - 16938.535: 62.4592% ( 78) 00:08:07.202 16938.535 - 17039.360: 63.3424% ( 65) 
00:08:07.202 17039.360 - 17140.185: 63.9130% ( 42) 00:08:07.202 17140.185 - 17241.009: 64.6332% ( 53) 00:08:07.202 17241.009 - 17341.834: 65.1766% ( 40) 00:08:07.202 17341.834 - 17442.658: 65.8696% ( 51) 00:08:07.202 17442.658 - 17543.483: 66.4130% ( 40) 00:08:07.202 17543.483 - 17644.308: 66.9022% ( 36) 00:08:07.202 17644.308 - 17745.132: 67.4592% ( 41) 00:08:07.202 17745.132 - 17845.957: 67.9348% ( 35) 00:08:07.202 17845.957 - 17946.782: 68.3424% ( 30) 00:08:07.202 17946.782 - 18047.606: 68.6957% ( 26) 00:08:07.202 18047.606 - 18148.431: 69.0353% ( 25) 00:08:07.202 18148.431 - 18249.255: 69.3478% ( 23) 00:08:07.202 18249.255 - 18350.080: 69.6196% ( 20) 00:08:07.202 18350.080 - 18450.905: 69.8641% ( 18) 00:08:07.202 18450.905 - 18551.729: 70.1359% ( 20) 00:08:07.202 18551.729 - 18652.554: 70.4755% ( 25) 00:08:07.202 18652.554 - 18753.378: 70.8424% ( 27) 00:08:07.202 18753.378 - 18854.203: 71.0462% ( 15) 00:08:07.202 18854.203 - 18955.028: 71.2092% ( 12) 00:08:07.202 18955.028 - 19055.852: 71.3315% ( 9) 00:08:07.202 19055.852 - 19156.677: 71.5625% ( 17) 00:08:07.202 19156.677 - 19257.502: 71.7527% ( 14) 00:08:07.202 19257.502 - 19358.326: 71.9022% ( 11) 00:08:07.202 19358.326 - 19459.151: 72.0109% ( 8) 00:08:07.202 19459.151 - 19559.975: 72.1332% ( 9) 00:08:07.202 19559.975 - 19660.800: 72.2283% ( 7) 00:08:07.202 19660.800 - 19761.625: 72.4185% ( 14) 00:08:07.202 19761.625 - 19862.449: 72.5815% ( 12) 00:08:07.202 19862.449 - 19963.274: 72.7446% ( 12) 00:08:07.202 19963.274 - 20064.098: 73.1250% ( 28) 00:08:07.202 20064.098 - 20164.923: 73.5190% ( 29) 00:08:07.202 20164.923 - 20265.748: 73.9402% ( 31) 00:08:07.202 20265.748 - 20366.572: 74.3207% ( 28) 00:08:07.202 20366.572 - 20467.397: 74.7418% ( 31) 00:08:07.202 20467.397 - 20568.222: 75.0815% ( 25) 00:08:07.202 20568.222 - 20669.046: 75.3533% ( 20) 00:08:07.202 20669.046 - 20769.871: 75.6386% ( 21) 00:08:07.202 20769.871 - 20870.695: 75.9375% ( 22) 00:08:07.202 20870.695 - 20971.520: 76.5761% ( 47) 00:08:07.202 20971.520 - 21072.345: 76.8750% ( 22) 00:08:07.202 21072.345 - 21173.169: 77.2690% ( 29) 00:08:07.202 21173.169 - 21273.994: 77.5951% ( 24) 00:08:07.202 21273.994 - 21374.818: 78.0842% ( 36) 00:08:07.202 21374.818 - 21475.643: 78.6413% ( 41) 00:08:07.202 21475.643 - 21576.468: 79.1848% ( 40) 00:08:07.202 21576.468 - 21677.292: 79.6739% ( 36) 00:08:07.202 21677.292 - 21778.117: 80.2582% ( 43) 00:08:07.202 21778.117 - 21878.942: 80.9511% ( 51) 00:08:07.202 21878.942 - 21979.766: 81.5217% ( 42) 00:08:07.202 21979.766 - 22080.591: 82.1739% ( 48) 00:08:07.202 22080.591 - 22181.415: 82.7582% ( 43) 00:08:07.203 22181.415 - 22282.240: 83.4103% ( 48) 00:08:07.203 22282.240 - 22383.065: 84.1033% ( 51) 00:08:07.203 22383.065 - 22483.889: 84.9321% ( 61) 00:08:07.203 22483.889 - 22584.714: 85.5842% ( 48) 00:08:07.203 22584.714 - 22685.538: 86.2228% ( 47) 00:08:07.203 22685.538 - 22786.363: 87.0788% ( 63) 00:08:07.203 22786.363 - 22887.188: 88.0299% ( 70) 00:08:07.203 22887.188 - 22988.012: 88.7908% ( 56) 00:08:07.203 22988.012 - 23088.837: 89.4837% ( 51) 00:08:07.203 23088.837 - 23189.662: 90.3261% ( 62) 00:08:07.203 23189.662 - 23290.486: 91.0734% ( 55) 00:08:07.203 23290.486 - 23391.311: 91.6984% ( 46) 00:08:07.203 23391.311 - 23492.135: 92.2418% ( 40) 00:08:07.203 23492.135 - 23592.960: 92.8125% ( 42) 00:08:07.203 23592.960 - 23693.785: 93.3016% ( 36) 00:08:07.203 23693.785 - 23794.609: 93.7364% ( 32) 00:08:07.203 23794.609 - 23895.434: 94.1576% ( 31) 00:08:07.203 23895.434 - 23996.258: 94.6875% ( 39) 00:08:07.203 23996.258 - 24097.083: 
95.2174% ( 39) 00:08:07.203 24097.083 - 24197.908: 95.6386% ( 31) 00:08:07.203 24197.908 - 24298.732: 96.0462% ( 30) 00:08:07.203 24298.732 - 24399.557: 96.4402% ( 29) 00:08:07.203 24399.557 - 24500.382: 96.8207% ( 28) 00:08:07.203 24500.382 - 24601.206: 97.2011% ( 28) 00:08:07.203 24601.206 - 24702.031: 97.5000% ( 22) 00:08:07.203 24702.031 - 24802.855: 97.7853% ( 21) 00:08:07.203 24802.855 - 24903.680: 98.0163% ( 17) 00:08:07.203 24903.680 - 25004.505: 98.2201% ( 15) 00:08:07.203 25004.505 - 25105.329: 98.3832% ( 12) 00:08:07.203 25105.329 - 25206.154: 98.5326% ( 11) 00:08:07.203 25206.154 - 25306.978: 98.7092% ( 13) 00:08:07.203 25306.978 - 25407.803: 98.8451% ( 10) 00:08:07.203 25407.803 - 25508.628: 98.9402% ( 7) 00:08:07.203 25508.628 - 25609.452: 99.0217% ( 6) 00:08:07.203 25609.452 - 25710.277: 99.0761% ( 4) 00:08:07.203 25710.277 - 25811.102: 99.1033% ( 2) 00:08:07.203 25811.102 - 26012.751: 99.1304% ( 2) 00:08:07.203 29037.489 - 29239.138: 99.1440% ( 1) 00:08:07.203 29239.138 - 29440.788: 99.1848% ( 3) 00:08:07.203 29440.788 - 29642.437: 99.2255% ( 3) 00:08:07.203 29642.437 - 29844.086: 99.2663% ( 3) 00:08:07.203 29844.086 - 30045.735: 99.3071% ( 3) 00:08:07.203 30045.735 - 30247.385: 99.3478% ( 3) 00:08:07.203 30247.385 - 30449.034: 99.4022% ( 4) 00:08:07.203 30449.034 - 30650.683: 99.4429% ( 3) 00:08:07.203 30650.683 - 30852.332: 99.4837% ( 3) 00:08:07.203 30852.332 - 31053.982: 99.5245% ( 3) 00:08:07.203 31053.982 - 31255.631: 99.5652% ( 3) 00:08:07.203 31255.631 - 31457.280: 99.6196% ( 4) 00:08:07.203 31457.280 - 31658.929: 99.6603% ( 3) 00:08:07.203 31658.929 - 31860.578: 99.7011% ( 3) 00:08:07.203 31860.578 - 32062.228: 99.7418% ( 3) 00:08:07.203 32062.228 - 32263.877: 99.7826% ( 3) 00:08:07.203 32263.877 - 32465.526: 99.8234% ( 3) 00:08:07.203 32465.526 - 32667.175: 99.8777% ( 4) 00:08:07.203 32667.175 - 32868.825: 99.9185% ( 3) 00:08:07.203 32868.825 - 33070.474: 99.9592% ( 3) 00:08:07.203 33070.474 - 33272.123: 100.0000% ( 3) 00:08:07.203 00:08:07.203 10:08:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:07.203 00:08:07.203 real 0m2.746s 00:08:07.203 user 0m2.419s 00:08:07.203 sys 0m0.217s 00:08:07.203 10:08:12 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.203 ************************************ 00:08:07.203 END TEST nvme_perf 00:08:07.203 10:08:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 ************************************ 00:08:07.203 10:08:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:07.203 10:08:12 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:07.203 10:08:12 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.203 10:08:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.203 ************************************ 00:08:07.203 START TEST nvme_hello_world 00:08:07.203 ************************************ 00:08:07.203 10:08:12 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:07.461 Initializing NVMe Controllers 00:08:07.461 Attached to 0000:00:10.0 00:08:07.461 Namespace ID: 1 size: 6GB 00:08:07.461 Attached to 0000:00:11.0 00:08:07.461 Namespace ID: 1 size: 5GB 00:08:07.461 Attached to 0000:00:13.0 00:08:07.461 Namespace ID: 1 size: 1GB 00:08:07.461 Attached to 0000:00:12.0 00:08:07.461 Namespace ID: 1 size: 4GB 00:08:07.461 Namespace ID: 2 size: 4GB 00:08:07.461 Namespace ID: 3 size: 
4GB 00:08:07.461 Initialization complete. 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 INFO: using host memory buffer for IO 00:08:07.461 Hello world! 00:08:07.461 ************************************ 00:08:07.461 END TEST nvme_hello_world 00:08:07.461 ************************************ 00:08:07.461 00:08:07.461 real 0m0.281s 00:08:07.461 user 0m0.107s 00:08:07.461 sys 0m0.123s 00:08:07.461 10:08:13 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.461 10:08:13 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:07.462 10:08:13 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:07.462 10:08:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.462 10:08:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.462 10:08:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.462 ************************************ 00:08:07.462 START TEST nvme_sgl 00:08:07.462 ************************************ 00:08:07.462 10:08:13 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:07.720 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:07.720 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:07.720 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:07.720 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:07.720 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:07.720 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:07.720 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:07.720 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:07.720 
0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:07.720 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:07.720 NVMe Readv/Writev Request test 00:08:07.720 Attached to 0000:00:10.0 00:08:07.720 Attached to 0000:00:11.0 00:08:07.720 Attached to 0000:00:13.0 00:08:07.720 Attached to 0000:00:12.0 00:08:07.720 0000:00:10.0: build_io_request_2 test passed 00:08:07.720 0000:00:10.0: build_io_request_4 test passed 00:08:07.720 0000:00:10.0: build_io_request_5 test passed 00:08:07.720 0000:00:10.0: build_io_request_6 test passed 00:08:07.720 0000:00:10.0: build_io_request_7 test passed 00:08:07.720 0000:00:10.0: build_io_request_10 test passed 00:08:07.720 0000:00:11.0: build_io_request_2 test passed 00:08:07.720 0000:00:11.0: build_io_request_4 test passed 00:08:07.720 0000:00:11.0: build_io_request_5 test passed 00:08:07.720 0000:00:11.0: build_io_request_6 test passed 00:08:07.720 0000:00:11.0: build_io_request_7 test passed 00:08:07.720 0000:00:11.0: build_io_request_10 test passed 00:08:07.720 Cleaning up... 00:08:07.720 00:08:07.720 real 0m0.313s 00:08:07.720 user 0m0.164s 00:08:07.720 sys 0m0.099s 00:08:07.720 ************************************ 00:08:07.720 END TEST nvme_sgl 00:08:07.720 ************************************ 00:08:07.720 10:08:13 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:07.720 10:08:13 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:07.720 10:08:13 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:07.720 10:08:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:07.720 10:08:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:07.720 10:08:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:07.720 ************************************ 00:08:07.720 START TEST nvme_e2edp 00:08:07.720 ************************************ 00:08:07.720 10:08:13 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:08.000 NVMe Write/Read with End-to-End data protection test 00:08:08.000 Attached to 0000:00:10.0 00:08:08.000 Attached to 0000:00:11.0 00:08:08.000 Attached to 0000:00:13.0 00:08:08.000 Attached to 0000:00:12.0 00:08:08.000 Cleaning up... 
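In the nvme_sgl output above, the "Invalid IO length parameter" lines are the negative cases the test drives on purpose, and "test passed" marks the requests that completed; a run is healthy when both kinds appear for every controller. A minimal tally sketch, assuming the excerpt has been saved to a hypothetical sgl.log:

# Sketch only: sgl.log is a hypothetical capture of the nvme_sgl output.
grep -c 'Invalid IO length parameter' sgl.log   # requests rejected by design
grep -c 'test passed' sgl.log                   # requests that completed normally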
00:08:08.000 00:08:08.000 real 0m0.232s 00:08:08.000 user 0m0.068s 00:08:08.000 sys 0m0.114s 00:08:08.000 ************************************ 00:08:08.000 END TEST nvme_e2edp 00:08:08.000 ************************************ 00:08:08.000 10:08:13 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.000 10:08:13 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 10:08:13 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:08.000 10:08:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.000 10:08:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.000 10:08:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.000 ************************************ 00:08:08.000 START TEST nvme_reserve 00:08:08.000 ************************************ 00:08:08.000 10:08:13 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:08.261 ===================================================== 00:08:08.261 NVMe Controller at PCI bus 0, device 16, function 0 00:08:08.261 ===================================================== 00:08:08.261 Reservations: Not Supported 00:08:08.261 ===================================================== 00:08:08.261 NVMe Controller at PCI bus 0, device 17, function 0 00:08:08.261 ===================================================== 00:08:08.261 Reservations: Not Supported 00:08:08.261 ===================================================== 00:08:08.261 NVMe Controller at PCI bus 0, device 19, function 0 00:08:08.261 ===================================================== 00:08:08.261 Reservations: Not Supported 00:08:08.261 ===================================================== 00:08:08.261 NVMe Controller at PCI bus 0, device 18, function 0 00:08:08.261 ===================================================== 00:08:08.261 Reservations: Not Supported 00:08:08.261 Reservation test passed 00:08:08.261 ************************************ 00:08:08.261 END TEST nvme_reserve 00:08:08.261 ************************************ 00:08:08.261 00:08:08.261 real 0m0.199s 00:08:08.261 user 0m0.073s 00:08:08.261 sys 0m0.085s 00:08:08.261 10:08:13 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.261 10:08:13 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:08.261 10:08:13 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:08.261 10:08:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:08.261 10:08:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.261 10:08:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.261 ************************************ 00:08:08.261 START TEST nvme_err_injection 00:08:08.261 ************************************ 00:08:08.261 10:08:13 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:08.522 NVMe Error Injection test 00:08:08.522 Attached to 0000:00:10.0 00:08:08.522 Attached to 0000:00:11.0 00:08:08.522 Attached to 0000:00:13.0 00:08:08.522 Attached to 0000:00:12.0 00:08:08.522 0000:00:12.0: get features failed as expected 00:08:08.522 0000:00:10.0: get features failed as expected 00:08:08.522 0000:00:11.0: get features failed as expected 00:08:08.522 0000:00:13.0: get features failed as expected 00:08:08.522 
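The "Reservations: Not Supported" lines in the nvme_reserve output above reflect a controller capability: bit 5 of the ONCS field in the Identify Controller data. A hedged sketch of checking the same bit outside the harness with nvme-cli, assuming its usual "oncs : 0x..." output layout, GNU awk for strtonum, and a placeholder device name:

# Sketch only: /dev/nvme0 is a placeholder; adjust to a real controller.
oncs=$(nvme id-ctrl /dev/nvme0 | awk '/^oncs/ {print strtonum($3)}')
# ONCS bit 5 set => the controller supports reservations.
if (( oncs & 0x20 )); then echo "reservations supported"; else echo "reservations not supported"; fi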
0000:00:10.0: get features successfully as expected 00:08:08.522 0000:00:11.0: get features successfully as expected 00:08:08.522 0000:00:13.0: get features successfully as expected 00:08:08.522 0000:00:12.0: get features successfully as expected 00:08:08.522 0000:00:10.0: read failed as expected 00:08:08.522 0000:00:11.0: read failed as expected 00:08:08.522 0000:00:13.0: read failed as expected 00:08:08.522 0000:00:12.0: read failed as expected 00:08:08.522 0000:00:10.0: read successfully as expected 00:08:08.522 0000:00:11.0: read successfully as expected 00:08:08.522 0000:00:13.0: read successfully as expected 00:08:08.522 0000:00:12.0: read successfully as expected 00:08:08.522 Cleaning up... 00:08:08.522 00:08:08.522 real 0m0.219s 00:08:08.522 user 0m0.078s 00:08:08.522 sys 0m0.100s 00:08:08.522 ************************************ 00:08:08.522 END TEST nvme_err_injection 00:08:08.522 ************************************ 00:08:08.522 10:08:14 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:08.522 10:08:14 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:08.522 10:08:14 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:08.522 10:08:14 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:08:08.522 10:08:14 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:08.522 10:08:14 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:08.522 ************************************ 00:08:08.522 START TEST nvme_overhead 00:08:08.522 ************************************ 00:08:08.522 10:08:14 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:09.913 Initializing NVMe Controllers 00:08:09.913 Attached to 0000:00:10.0 00:08:09.913 Attached to 0000:00:11.0 00:08:09.913 Attached to 0000:00:13.0 00:08:09.913 Attached to 0000:00:12.0 00:08:09.913 Initialization complete. Launching workers. 
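The overhead harness is launched above as "overhead -o 4096 -t 1 -H -i 0". Reading the flags by SPDK's usual perf-tool conventions (an inference, not confirmed by this log): -o is the IO size in bytes, -t the run time in seconds, -H asks for the latency histograms printed below, and -i the shared-memory id. A longer window smooths the sparse tail buckets, e.g.:

# Sketch only: the same invocation with the 1 s run time raised to 10 s
# (flag meanings inferred from SPDK convention, not from this log).
/home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 10 -H -i 0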
00:08:09.913 submit (in ns) avg, min, max = 11912.0, 10447.7, 198952.3 00:08:09.913 complete (in ns) avg, min, max = 7915.4, 7168.5, 84192.3 00:08:09.913 00:08:09.913 Submit histogram 00:08:09.913 ================ 00:08:09.913 Range in us Cumulative Count 00:08:09.913 10.437 - 10.486: 0.0081% ( 1) 00:08:09.913 10.732 - 10.782: 0.0895% ( 10) 00:08:09.913 10.782 - 10.831: 0.4229% ( 41) 00:08:09.913 10.831 - 10.880: 1.2525% ( 102) 00:08:09.913 10.880 - 10.929: 3.0500% ( 221) 00:08:09.913 10.929 - 10.978: 7.1492% ( 504) 00:08:09.913 10.978 - 11.028: 13.6153% ( 795) 00:08:09.913 11.028 - 11.077: 22.6190% ( 1107) 00:08:09.913 11.077 - 11.126: 32.3627% ( 1198) 00:08:09.913 11.126 - 11.175: 42.1065% ( 1198) 00:08:09.913 11.175 - 11.225: 50.7686% ( 1065) 00:08:09.913 11.225 - 11.274: 56.3074% ( 681) 00:08:09.913 11.274 - 11.323: 60.2115% ( 480) 00:08:09.913 11.323 - 11.372: 63.4730% ( 401) 00:08:09.913 11.372 - 11.422: 65.5632% ( 257) 00:08:09.913 11.422 - 11.471: 67.4990% ( 238) 00:08:09.913 11.471 - 11.520: 69.1826% ( 207) 00:08:09.913 11.520 - 11.569: 70.4514% ( 156) 00:08:09.913 11.569 - 11.618: 71.5901% ( 140) 00:08:09.913 11.618 - 11.668: 72.7694% ( 145) 00:08:09.913 11.668 - 11.717: 73.6641% ( 110) 00:08:09.913 11.717 - 11.766: 74.6482% ( 121) 00:08:09.913 11.766 - 11.815: 76.0146% ( 168) 00:08:09.913 11.815 - 11.865: 77.6901% ( 206) 00:08:09.913 11.865 - 11.914: 79.7316% ( 251) 00:08:09.913 11.914 - 11.963: 81.7812% ( 252) 00:08:09.913 11.963 - 12.012: 83.6682% ( 232) 00:08:09.913 12.012 - 12.062: 85.3518% ( 207) 00:08:09.913 12.062 - 12.111: 87.0028% ( 203) 00:08:09.913 12.111 - 12.160: 88.2879% ( 158) 00:08:09.913 12.160 - 12.209: 89.3127% ( 126) 00:08:09.913 12.209 - 12.258: 90.0610% ( 92) 00:08:09.913 12.258 - 12.308: 90.5897% ( 65) 00:08:09.913 12.308 - 12.357: 91.0126% ( 52) 00:08:09.913 12.357 - 12.406: 91.2647% ( 31) 00:08:09.913 12.406 - 12.455: 91.4518% ( 23) 00:08:09.913 12.455 - 12.505: 91.5982% ( 18) 00:08:09.913 12.505 - 12.554: 91.7527% ( 19) 00:08:09.913 12.554 - 12.603: 91.9398% ( 23) 00:08:09.913 12.603 - 12.702: 92.1757% ( 29) 00:08:09.913 12.702 - 12.800: 92.3139% ( 17) 00:08:09.913 12.800 - 12.898: 92.4034% ( 11) 00:08:09.913 12.898 - 12.997: 92.5254% ( 15) 00:08:09.913 12.997 - 13.095: 92.6393% ( 14) 00:08:09.913 13.095 - 13.194: 92.8508% ( 26) 00:08:09.913 13.194 - 13.292: 93.1761% ( 40) 00:08:09.913 13.292 - 13.391: 93.4608% ( 35) 00:08:09.913 13.391 - 13.489: 93.7292% ( 33) 00:08:09.913 13.489 - 13.588: 93.8674% ( 17) 00:08:09.913 13.588 - 13.686: 94.0626% ( 24) 00:08:09.913 13.686 - 13.785: 94.2009% ( 17) 00:08:09.913 13.785 - 13.883: 94.3798% ( 22) 00:08:09.913 13.883 - 13.982: 94.4856% ( 13) 00:08:09.913 13.982 - 14.080: 94.6076% ( 15) 00:08:09.913 14.080 - 14.178: 94.6726% ( 8) 00:08:09.913 14.178 - 14.277: 94.8028% ( 16) 00:08:09.913 14.277 - 14.375: 94.8841% ( 10) 00:08:09.913 14.375 - 14.474: 94.9736% ( 11) 00:08:09.913 14.474 - 14.572: 95.1200% ( 18) 00:08:09.913 14.572 - 14.671: 95.1932% ( 9) 00:08:09.913 14.671 - 14.769: 95.2826% ( 11) 00:08:09.913 14.769 - 14.868: 95.3721% ( 11) 00:08:09.913 14.868 - 14.966: 95.3965% ( 3) 00:08:09.913 14.966 - 15.065: 95.4616% ( 8) 00:08:09.913 15.065 - 15.163: 95.5185% ( 7) 00:08:09.913 15.163 - 15.262: 95.6161% ( 12) 00:08:09.913 15.262 - 15.360: 95.6649% ( 6) 00:08:09.913 15.360 - 15.458: 95.7869% ( 15) 00:08:09.913 15.458 - 15.557: 95.8357% ( 6) 00:08:09.913 15.557 - 15.655: 95.8682% ( 4) 00:08:09.913 15.655 - 15.754: 95.9577% ( 11) 00:08:09.913 15.754 - 15.852: 96.0228% ( 8) 00:08:09.913 15.852 - 15.951: 96.0716% ( 
6) 00:08:09.913 15.951 - 16.049: 96.1204% ( 6) 00:08:09.913 16.049 - 16.148: 96.1448% ( 3) 00:08:09.913 16.148 - 16.246: 96.1692% ( 3) 00:08:09.913 16.246 - 16.345: 96.2424% ( 9) 00:08:09.913 16.345 - 16.443: 96.3074% ( 8) 00:08:09.913 16.443 - 16.542: 96.3400% ( 4) 00:08:09.913 16.542 - 16.640: 96.4294% ( 11) 00:08:09.913 16.640 - 16.738: 96.5433% ( 14) 00:08:09.913 16.738 - 16.837: 96.5758% ( 4) 00:08:09.913 16.837 - 16.935: 96.6490% ( 9) 00:08:09.913 16.935 - 17.034: 96.6897% ( 5) 00:08:09.913 17.034 - 17.132: 96.7385% ( 6) 00:08:09.913 17.132 - 17.231: 96.8524% ( 14) 00:08:09.913 17.231 - 17.329: 96.9337% ( 10) 00:08:09.913 17.329 - 17.428: 97.0069% ( 9) 00:08:09.913 17.428 - 17.526: 97.0882% ( 10) 00:08:09.913 17.526 - 17.625: 97.1696% ( 10) 00:08:09.913 17.625 - 17.723: 97.2102% ( 5) 00:08:09.913 17.723 - 17.822: 97.2590% ( 6) 00:08:09.913 17.822 - 17.920: 97.2997% ( 5) 00:08:09.913 17.920 - 18.018: 97.3892% ( 11) 00:08:09.913 18.018 - 18.117: 97.4624% ( 9) 00:08:09.913 18.117 - 18.215: 97.4949% ( 4) 00:08:09.913 18.215 - 18.314: 97.5193% ( 3) 00:08:09.913 18.314 - 18.412: 97.5275% ( 1) 00:08:09.913 18.412 - 18.511: 97.5437% ( 2) 00:08:09.913 18.511 - 18.609: 97.5681% ( 3) 00:08:09.913 18.609 - 18.708: 97.6169% ( 6) 00:08:09.913 18.708 - 18.806: 97.6413% ( 3) 00:08:09.913 18.806 - 18.905: 97.6657% ( 3) 00:08:09.913 18.905 - 19.003: 97.6901% ( 3) 00:08:09.913 19.003 - 19.102: 97.7064% ( 2) 00:08:09.913 19.102 - 19.200: 97.7308% ( 3) 00:08:09.913 19.200 - 19.298: 97.7389% ( 1) 00:08:09.913 19.298 - 19.397: 97.7552% ( 2) 00:08:09.913 19.397 - 19.495: 97.7633% ( 1) 00:08:09.913 19.495 - 19.594: 97.7796% ( 2) 00:08:09.913 19.594 - 19.692: 97.7877% ( 1) 00:08:09.913 19.692 - 19.791: 97.8040% ( 2) 00:08:09.913 19.889 - 19.988: 97.8121% ( 1) 00:08:09.913 20.283 - 20.382: 97.8284% ( 2) 00:08:09.913 20.382 - 20.480: 97.8447% ( 2) 00:08:09.913 20.480 - 20.578: 97.8528% ( 1) 00:08:09.913 20.578 - 20.677: 97.8609% ( 1) 00:08:09.913 20.677 - 20.775: 97.8772% ( 2) 00:08:09.913 20.972 - 21.071: 97.8853% ( 1) 00:08:09.913 21.465 - 21.563: 97.8935% ( 1) 00:08:09.913 21.563 - 21.662: 97.9097% ( 2) 00:08:09.913 21.662 - 21.760: 97.9260% ( 2) 00:08:09.913 21.760 - 21.858: 97.9341% ( 1) 00:08:09.913 21.858 - 21.957: 97.9504% ( 2) 00:08:09.913 22.055 - 22.154: 97.9667% ( 2) 00:08:09.913 22.154 - 22.252: 97.9911% ( 3) 00:08:09.913 22.548 - 22.646: 97.9992% ( 1) 00:08:09.914 22.843 - 22.942: 98.0561% ( 7) 00:08:09.914 22.942 - 23.040: 98.1456% ( 11) 00:08:09.914 23.040 - 23.138: 98.2595% ( 14) 00:08:09.914 23.138 - 23.237: 98.4140% ( 19) 00:08:09.914 23.237 - 23.335: 98.6092% ( 24) 00:08:09.914 23.335 - 23.434: 98.7149% ( 13) 00:08:09.914 23.434 - 23.532: 98.7719% ( 7) 00:08:09.914 23.532 - 23.631: 98.7963% ( 3) 00:08:09.914 23.631 - 23.729: 98.8125% ( 2) 00:08:09.914 23.729 - 23.828: 98.8369% ( 3) 00:08:09.914 23.828 - 23.926: 98.8532% ( 2) 00:08:09.914 23.926 - 24.025: 98.8776% ( 3) 00:08:09.914 24.025 - 24.123: 98.9101% ( 4) 00:08:09.914 24.123 - 24.222: 98.9508% ( 5) 00:08:09.914 24.222 - 24.320: 98.9671% ( 2) 00:08:09.914 24.320 - 24.418: 98.9915% ( 3) 00:08:09.914 24.418 - 24.517: 99.0159% ( 3) 00:08:09.914 24.615 - 24.714: 99.0321% ( 2) 00:08:09.914 24.714 - 24.812: 99.0403% ( 1) 00:08:09.914 24.812 - 24.911: 99.0565% ( 2) 00:08:09.914 24.911 - 25.009: 99.0647% ( 1) 00:08:09.914 25.206 - 25.403: 99.0728% ( 1) 00:08:09.914 25.403 - 25.600: 99.0809% ( 1) 00:08:09.914 25.600 - 25.797: 99.0891% ( 1) 00:08:09.914 25.994 - 26.191: 99.1053% ( 2) 00:08:09.914 26.388 - 26.585: 99.1216% ( 2) 00:08:09.914 26.585 
- 26.782: 99.1379% ( 2) 00:08:09.914 26.782 - 26.978: 99.1541% ( 2) 00:08:09.914 26.978 - 27.175: 99.1704% ( 2) 00:08:09.914 27.175 - 27.372: 99.1785% ( 1) 00:08:09.914 27.372 - 27.569: 99.1948% ( 2) 00:08:09.914 27.569 - 27.766: 99.2111% ( 2) 00:08:09.914 27.766 - 27.963: 99.3087% ( 12) 00:08:09.914 27.963 - 28.160: 99.3900% ( 10) 00:08:09.914 28.160 - 28.357: 99.5039% ( 14) 00:08:09.914 28.357 - 28.554: 99.5852% ( 10) 00:08:09.914 28.554 - 28.751: 99.6340% ( 6) 00:08:09.914 28.751 - 28.948: 99.6421% ( 1) 00:08:09.914 28.948 - 29.145: 99.6584% ( 2) 00:08:09.914 29.145 - 29.342: 99.6747% ( 2) 00:08:09.914 29.342 - 29.538: 99.6828% ( 1) 00:08:09.914 29.538 - 29.735: 99.6909% ( 1) 00:08:09.914 29.735 - 29.932: 99.7072% ( 2) 00:08:09.914 30.326 - 30.523: 99.7235% ( 2) 00:08:09.914 30.523 - 30.720: 99.7316% ( 1) 00:08:09.914 30.917 - 31.114: 99.7397% ( 1) 00:08:09.914 31.114 - 31.311: 99.7641% ( 3) 00:08:09.914 31.705 - 31.902: 99.7723% ( 1) 00:08:09.914 32.098 - 32.295: 99.7804% ( 1) 00:08:09.914 32.295 - 32.492: 99.7885% ( 1) 00:08:09.914 32.492 - 32.689: 99.7967% ( 1) 00:08:09.914 32.689 - 32.886: 99.8048% ( 1) 00:08:09.914 33.083 - 33.280: 99.8211% ( 2) 00:08:09.914 33.280 - 33.477: 99.8292% ( 1) 00:08:09.914 34.462 - 34.658: 99.8373% ( 1) 00:08:09.914 35.249 - 35.446: 99.8455% ( 1) 00:08:09.914 35.643 - 35.840: 99.8536% ( 1) 00:08:09.914 35.840 - 36.037: 99.8617% ( 1) 00:08:09.914 36.037 - 36.234: 99.8699% ( 1) 00:08:09.914 37.809 - 38.006: 99.8780% ( 1) 00:08:09.914 38.006 - 38.203: 99.8943% ( 2) 00:08:09.914 38.794 - 38.991: 99.9024% ( 1) 00:08:09.914 40.369 - 40.566: 99.9105% ( 1) 00:08:09.914 40.566 - 40.763: 99.9187% ( 1) 00:08:09.914 42.142 - 42.338: 99.9268% ( 1) 00:08:09.914 46.277 - 46.474: 99.9349% ( 1) 00:08:09.914 48.246 - 48.443: 99.9431% ( 1) 00:08:09.914 48.837 - 49.034: 99.9512% ( 1) 00:08:09.914 56.714 - 57.108: 99.9593% ( 1) 00:08:09.914 60.258 - 60.652: 99.9675% ( 1) 00:08:09.914 61.046 - 61.440: 99.9756% ( 1) 00:08:09.914 89.009 - 89.403: 99.9837% ( 1) 00:08:09.914 116.578 - 117.366: 99.9919% ( 1) 00:08:09.914 198.498 - 199.286: 100.0000% ( 1) 00:08:09.914 00:08:09.914 Complete histogram 00:08:09.914 ================== 00:08:09.914 Range in us Cumulative Count 00:08:09.914 7.138 - 7.188: 0.0081% ( 1) 00:08:09.914 7.188 - 7.237: 0.2928% ( 35) 00:08:09.914 7.237 - 7.286: 4.0179% ( 458) 00:08:09.914 7.286 - 7.335: 15.5592% ( 1419) 00:08:09.914 7.335 - 7.385: 33.0216% ( 2147) 00:08:09.914 7.385 - 7.434: 51.1346% ( 2227) 00:08:09.914 7.434 - 7.483: 64.6848% ( 1666) 00:08:09.914 7.483 - 7.532: 74.4612% ( 1202) 00:08:09.914 7.532 - 7.582: 81.2119% ( 830) 00:08:09.914 7.582 - 7.631: 85.0264% ( 469) 00:08:09.914 7.631 - 7.680: 87.5966% ( 316) 00:08:09.914 7.680 - 7.729: 89.2965% ( 209) 00:08:09.914 7.729 - 7.778: 90.2074% ( 112) 00:08:09.914 7.778 - 7.828: 90.7279% ( 64) 00:08:09.914 7.828 - 7.877: 91.1102% ( 47) 00:08:09.914 7.877 - 7.926: 91.3867% ( 34) 00:08:09.914 7.926 - 7.975: 91.6145% ( 28) 00:08:09.914 7.975 - 8.025: 91.7771% ( 20) 00:08:09.914 8.025 - 8.074: 91.9398% ( 20) 00:08:09.914 8.074 - 8.123: 92.0781% ( 17) 00:08:09.914 8.123 - 8.172: 92.1838% ( 13) 00:08:09.914 8.172 - 8.222: 92.3302% ( 18) 00:08:09.914 8.222 - 8.271: 92.4929% ( 20) 00:08:09.914 8.271 - 8.320: 92.7206% ( 28) 00:08:09.914 8.320 - 8.369: 92.9484% ( 28) 00:08:09.914 8.369 - 8.418: 93.1761% ( 28) 00:08:09.914 8.418 - 8.468: 93.3876% ( 26) 00:08:09.914 8.468 - 8.517: 93.5502% ( 20) 00:08:09.914 8.517 - 8.566: 93.6722% ( 15) 00:08:09.914 8.566 - 8.615: 93.7373% ( 8) 00:08:09.914 8.615 - 8.665: 
93.7780% ( 5) 00:08:09.914 8.665 - 8.714: 93.8430% ( 8) 00:08:09.914 8.714 - 8.763: 93.8674% ( 3) 00:08:09.914 8.763 - 8.812: 93.9488% ( 10) 00:08:09.914 8.812 - 8.862: 93.9569% ( 1) 00:08:09.914 8.862 - 8.911: 94.0708% ( 14) 00:08:09.914 8.911 - 8.960: 94.2578% ( 23) 00:08:09.914 8.960 - 9.009: 94.5018% ( 30) 00:08:09.914 9.009 - 9.058: 94.6889% ( 23) 00:08:09.914 9.058 - 9.108: 94.8190% ( 16) 00:08:09.914 9.108 - 9.157: 94.9166% ( 12) 00:08:09.914 9.157 - 9.206: 95.0061% ( 11) 00:08:09.914 9.206 - 9.255: 95.0305% ( 3) 00:08:09.914 9.255 - 9.305: 95.0549% ( 3) 00:08:09.914 9.305 - 9.354: 95.0793% ( 3) 00:08:09.914 9.354 - 9.403: 95.0956% ( 2) 00:08:09.914 9.403 - 9.452: 95.1200% ( 3) 00:08:09.914 9.452 - 9.502: 95.1362% ( 2) 00:08:09.914 9.502 - 9.551: 95.1525% ( 2) 00:08:09.914 9.551 - 9.600: 95.1606% ( 1) 00:08:09.914 9.600 - 9.649: 95.1850% ( 3) 00:08:09.914 9.649 - 9.698: 95.2420% ( 7) 00:08:09.914 9.698 - 9.748: 95.2501% ( 1) 00:08:09.914 9.748 - 9.797: 95.2745% ( 3) 00:08:09.914 9.797 - 9.846: 95.2989% ( 3) 00:08:09.914 9.846 - 9.895: 95.3314% ( 4) 00:08:09.914 9.895 - 9.945: 95.3477% ( 2) 00:08:09.914 9.945 - 9.994: 95.3884% ( 5) 00:08:09.914 9.994 - 10.043: 95.4046% ( 2) 00:08:09.914 10.043 - 10.092: 95.4372% ( 4) 00:08:09.914 10.092 - 10.142: 95.4616% ( 3) 00:08:09.914 10.142 - 10.191: 95.5022% ( 5) 00:08:09.914 10.191 - 10.240: 95.5429% ( 5) 00:08:09.914 10.240 - 10.289: 95.5673% ( 3) 00:08:09.914 10.289 - 10.338: 95.6080% ( 5) 00:08:09.914 10.338 - 10.388: 95.6405% ( 4) 00:08:09.914 10.388 - 10.437: 95.6812% ( 5) 00:08:09.914 10.437 - 10.486: 95.7056% ( 3) 00:08:09.914 10.486 - 10.535: 95.7462% ( 5) 00:08:09.914 10.535 - 10.585: 95.8032% ( 7) 00:08:09.914 10.585 - 10.634: 95.8438% ( 5) 00:08:09.914 10.634 - 10.683: 95.8520% ( 1) 00:08:09.914 10.683 - 10.732: 95.8764% ( 3) 00:08:09.914 10.732 - 10.782: 95.9170% ( 5) 00:08:09.914 10.782 - 10.831: 95.9577% ( 5) 00:08:09.914 10.831 - 10.880: 95.9740% ( 2) 00:08:09.914 10.880 - 10.929: 96.0065% ( 4) 00:08:09.914 10.929 - 10.978: 96.0228% ( 2) 00:08:09.914 10.978 - 11.028: 96.0472% ( 3) 00:08:09.914 11.028 - 11.077: 96.0716% ( 3) 00:08:09.914 11.077 - 11.126: 96.0878% ( 2) 00:08:09.914 11.126 - 11.175: 96.1041% ( 2) 00:08:09.914 11.175 - 11.225: 96.1285% ( 3) 00:08:09.914 11.225 - 11.274: 96.1692% ( 5) 00:08:09.914 11.274 - 11.323: 96.1854% ( 2) 00:08:09.914 11.323 - 11.372: 96.2017% ( 2) 00:08:09.914 11.372 - 11.422: 96.2098% ( 1) 00:08:09.914 11.422 - 11.471: 96.2424% ( 4) 00:08:09.914 11.471 - 11.520: 96.2749% ( 4) 00:08:09.914 11.520 - 11.569: 96.2993% ( 3) 00:08:09.914 11.569 - 11.618: 96.3074% ( 1) 00:08:09.914 11.618 - 11.668: 96.3237% ( 2) 00:08:09.914 11.668 - 11.717: 96.3318% ( 1) 00:08:09.914 11.717 - 11.766: 96.3400% ( 1) 00:08:09.914 11.815 - 11.865: 96.3481% ( 1) 00:08:09.914 11.963 - 12.012: 96.3644% ( 2) 00:08:09.914 12.062 - 12.111: 96.3806% ( 2) 00:08:09.914 12.160 - 12.209: 96.3888% ( 1) 00:08:09.914 12.258 - 12.308: 96.3969% ( 1) 00:08:09.914 12.308 - 12.357: 96.4050% ( 1) 00:08:09.914 12.357 - 12.406: 96.4132% ( 1) 00:08:09.914 12.455 - 12.505: 96.4294% ( 2) 00:08:09.914 12.505 - 12.554: 96.4376% ( 1) 00:08:09.914 12.603 - 12.702: 96.4538% ( 2) 00:08:09.914 12.702 - 12.800: 96.4620% ( 1) 00:08:09.914 12.800 - 12.898: 96.5189% ( 7) 00:08:09.914 12.898 - 12.997: 96.5596% ( 5) 00:08:09.914 12.997 - 13.095: 96.6084% ( 6) 00:08:09.914 13.095 - 13.194: 96.6816% ( 9) 00:08:09.914 13.194 - 13.292: 96.8117% ( 16) 00:08:09.914 13.292 - 13.391: 96.9093% ( 12) 00:08:09.915 13.391 - 13.489: 96.9662% ( 7) 00:08:09.915 13.489 - 
13.588: 97.0720% ( 13) 00:08:09.915 13.588 - 13.686: 97.1614% ( 11) 00:08:09.915 13.686 - 13.785: 97.2346% ( 9) 00:08:09.915 13.785 - 13.883: 97.3404% ( 13) 00:08:09.915 13.883 - 13.982: 97.3810% ( 5) 00:08:09.915 13.982 - 14.080: 97.4298% ( 6) 00:08:09.915 14.080 - 14.178: 97.4624% ( 4) 00:08:09.915 14.178 - 14.277: 97.4786% ( 2) 00:08:09.915 14.277 - 14.375: 97.5031% ( 3) 00:08:09.915 14.375 - 14.474: 97.5356% ( 4) 00:08:09.915 14.474 - 14.572: 97.5600% ( 3) 00:08:09.915 14.572 - 14.671: 97.6007% ( 5) 00:08:09.915 14.671 - 14.769: 97.6413% ( 5) 00:08:09.915 14.769 - 14.868: 97.6739% ( 4) 00:08:09.915 14.868 - 14.966: 97.6901% ( 2) 00:08:09.915 15.065 - 15.163: 97.7308% ( 5) 00:08:09.915 15.163 - 15.262: 97.7389% ( 1) 00:08:09.915 15.262 - 15.360: 97.7552% ( 2) 00:08:09.915 15.360 - 15.458: 97.7633% ( 1) 00:08:09.915 15.557 - 15.655: 97.7715% ( 1) 00:08:09.915 15.655 - 15.754: 97.7796% ( 1) 00:08:09.915 15.754 - 15.852: 97.7877% ( 1) 00:08:09.915 15.852 - 15.951: 97.7959% ( 1) 00:08:09.915 16.148 - 16.246: 97.8365% ( 5) 00:08:09.915 16.246 - 16.345: 97.9504% ( 14) 00:08:09.915 16.345 - 16.443: 98.0724% ( 15) 00:08:09.915 16.443 - 16.542: 98.3652% ( 36) 00:08:09.915 16.542 - 16.640: 98.4709% ( 13) 00:08:09.915 16.640 - 16.738: 98.5848% ( 14) 00:08:09.915 16.738 - 16.837: 98.6336% ( 6) 00:08:09.915 16.837 - 16.935: 98.6743% ( 5) 00:08:09.915 16.935 - 17.034: 98.6987% ( 3) 00:08:09.915 17.231 - 17.329: 98.7475% ( 6) 00:08:09.915 17.329 - 17.428: 98.7637% ( 2) 00:08:09.915 17.428 - 17.526: 98.7719% ( 1) 00:08:09.915 17.526 - 17.625: 98.7963% ( 3) 00:08:09.915 17.625 - 17.723: 98.8125% ( 2) 00:08:09.915 17.723 - 17.822: 98.8207% ( 1) 00:08:09.915 17.822 - 17.920: 98.8288% ( 1) 00:08:09.915 17.920 - 18.018: 98.8369% ( 1) 00:08:09.915 18.018 - 18.117: 98.8532% ( 2) 00:08:09.915 18.117 - 18.215: 98.8939% ( 5) 00:08:09.915 18.215 - 18.314: 98.9020% ( 1) 00:08:09.915 18.314 - 18.412: 98.9183% ( 2) 00:08:09.915 18.412 - 18.511: 98.9345% ( 2) 00:08:09.915 18.511 - 18.609: 98.9427% ( 1) 00:08:09.915 18.609 - 18.708: 98.9671% ( 3) 00:08:09.915 19.003 - 19.102: 98.9915% ( 3) 00:08:09.915 19.298 - 19.397: 99.0077% ( 2) 00:08:09.915 19.495 - 19.594: 99.0321% ( 3) 00:08:09.915 19.594 - 19.692: 99.0565% ( 3) 00:08:09.915 19.692 - 19.791: 99.1216% ( 8) 00:08:09.915 19.791 - 19.889: 99.1867% ( 8) 00:08:09.915 19.889 - 19.988: 99.2355% ( 6) 00:08:09.915 19.988 - 20.086: 99.3331% ( 12) 00:08:09.915 20.086 - 20.185: 99.3819% ( 6) 00:08:09.915 20.185 - 20.283: 99.4144% ( 4) 00:08:09.915 20.283 - 20.382: 99.4632% ( 6) 00:08:09.915 20.382 - 20.480: 99.4876% ( 3) 00:08:09.915 20.578 - 20.677: 99.5039% ( 2) 00:08:09.915 20.775 - 20.874: 99.5120% ( 1) 00:08:09.915 20.874 - 20.972: 99.5283% ( 2) 00:08:09.915 21.071 - 21.169: 99.5527% ( 3) 00:08:09.915 21.169 - 21.268: 99.5689% ( 2) 00:08:09.915 21.268 - 21.366: 99.5771% ( 1) 00:08:09.915 21.662 - 21.760: 99.6015% ( 3) 00:08:09.915 21.760 - 21.858: 99.6096% ( 1) 00:08:09.915 22.351 - 22.449: 99.6177% ( 1) 00:08:09.915 22.548 - 22.646: 99.6340% ( 2) 00:08:09.915 22.745 - 22.843: 99.6421% ( 1) 00:08:09.915 23.631 - 23.729: 99.6503% ( 1) 00:08:09.915 23.729 - 23.828: 99.6584% ( 1) 00:08:09.915 23.828 - 23.926: 99.6665% ( 1) 00:08:09.915 24.025 - 24.123: 99.6828% ( 2) 00:08:09.915 24.320 - 24.418: 99.6991% ( 2) 00:08:09.915 24.418 - 24.517: 99.7072% ( 1) 00:08:09.915 24.714 - 24.812: 99.7153% ( 1) 00:08:09.915 25.206 - 25.403: 99.7235% ( 1) 00:08:09.915 25.600 - 25.797: 99.7316% ( 1) 00:08:09.915 26.388 - 26.585: 99.7397% ( 1) 00:08:09.915 26.585 - 26.782: 99.7479% ( 1) 
00:08:09.915 26.782 - 26.978: 99.7560% ( 1) 00:08:09.915 27.372 - 27.569: 99.7641% ( 1) 00:08:09.915 28.160 - 28.357: 99.7723% ( 1) 00:08:09.915 29.538 - 29.735: 99.7804% ( 1) 00:08:09.915 30.523 - 30.720: 99.7885% ( 1) 00:08:09.915 31.902 - 32.098: 99.7967% ( 1) 00:08:09.915 33.477 - 33.674: 99.8048% ( 1) 00:08:09.915 34.265 - 34.462: 99.8129% ( 1) 00:08:09.915 34.462 - 34.658: 99.8292% ( 2) 00:08:09.915 34.658 - 34.855: 99.8455% ( 2) 00:08:09.915 35.249 - 35.446: 99.8536% ( 1) 00:08:09.915 36.825 - 37.022: 99.8780% ( 3) 00:08:09.915 37.809 - 38.006: 99.8861% ( 1) 00:08:09.915 38.597 - 38.794: 99.9105% ( 3) 00:08:09.915 40.763 - 40.960: 99.9187% ( 1) 00:08:09.915 46.277 - 46.474: 99.9268% ( 1) 00:08:09.915 46.868 - 47.065: 99.9431% ( 2) 00:08:09.915 49.822 - 50.018: 99.9512% ( 1) 00:08:09.915 63.409 - 63.803: 99.9593% ( 1) 00:08:09.915 63.803 - 64.197: 99.9675% ( 1) 00:08:09.915 77.982 - 78.375: 99.9756% ( 1) 00:08:09.915 78.375 - 78.769: 99.9837% ( 1) 00:08:09.915 79.951 - 80.345: 99.9919% ( 1) 00:08:09.915 83.889 - 84.283: 100.0000% ( 1) 00:08:09.915 00:08:09.915 ************************************ 00:08:09.915 END TEST nvme_overhead 00:08:09.915 ************************************ 00:08:09.915 00:08:09.915 real 0m1.242s 00:08:09.915 user 0m1.080s 00:08:09.915 sys 0m0.110s 00:08:09.915 10:08:15 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.915 10:08:15 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:09.915 10:08:15 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:09.915 10:08:15 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:09.915 10:08:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.915 10:08:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.915 ************************************ 00:08:09.915 START TEST nvme_arbitration 00:08:09.915 ************************************ 00:08:09.915 10:08:15 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:13.217 Initializing NVMe Controllers 00:08:13.217 Attached to 0000:00:10.0 00:08:13.217 Attached to 0000:00:11.0 00:08:13.217 Attached to 0000:00:13.0 00:08:13.217 Attached to 0000:00:12.0 00:08:13.217 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:13.217 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:13.217 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:13.217 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:13.217 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:13.217 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:13.217 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:13.217 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:13.217 Initialization complete. Launching workers. 
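The summary "submit (in ns) avg, min, max = 11912.0, 10447.7, 198952.3" squares with the submit histogram above: the cumulative count crosses 50% in the 11.175-11.225 us bucket, so the median sits near 11.2 us, while the mean converts to about 11.9 us and lands above it, as expected for a distribution with a roughly 199 us tail. The unit conversion, as a sketch:

# Sketch only: the histogram buckets are labelled in us, the summary
# line in ns; convert for a side-by-side comparison.
printf 'avg=%.2f us min=%.2f us max=%.2f us\n' \
  "$(echo '11912.0 / 1000' | bc -l)" \
  "$(echo '10447.7 / 1000' | bc -l)" \
  "$(echo '198952.3 / 1000' | bc -l)"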
00:08:13.217 Starting thread on core 1 with urgent priority queue 00:08:13.217 Starting thread on core 2 with urgent priority queue 00:08:13.217 Starting thread on core 3 with urgent priority queue 00:08:13.217 Starting thread on core 0 with urgent priority queue 00:08:13.217 QEMU NVMe Ctrl (12340 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:08:13.217 QEMU NVMe Ctrl (12342 ) core 0: 938.67 IO/s 106.53 secs/100000 ios 00:08:13.217 QEMU NVMe Ctrl (12341 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:13.217 QEMU NVMe Ctrl (12342 ) core 1: 874.67 IO/s 114.33 secs/100000 ios 00:08:13.217 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:08:13.217 QEMU NVMe Ctrl (12342 ) core 3: 789.33 IO/s 126.69 secs/100000 ios 00:08:13.217 ======================================================== 00:08:13.217 00:08:13.217 00:08:13.217 real 0m3.293s 00:08:13.217 user 0m9.200s 00:08:13.217 sys 0m0.118s 00:08:13.217 10:08:18 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.217 ************************************ 00:08:13.217 END TEST nvme_arbitration 00:08:13.217 ************************************ 00:08:13.217 10:08:18 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:13.217 10:08:18 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:13.217 10:08:18 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:13.217 10:08:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.217 10:08:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.217 ************************************ 00:08:13.217 START TEST nvme_single_aen 00:08:13.217 ************************************ 00:08:13.217 10:08:18 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:13.478 Asynchronous Event Request test 00:08:13.478 Attached to 0000:00:10.0 00:08:13.478 Attached to 0000:00:11.0 00:08:13.478 Attached to 0000:00:13.0 00:08:13.478 Attached to 0000:00:12.0 00:08:13.478 Reset controller to setup AER completions for this process 00:08:13.478 Registering asynchronous event callbacks... 
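The per-core arbitration lines above are internally consistent: each pairs an IO rate with the seconds that rate implies for the fixed 100000-IO target, so rate times seconds should land back near 100000. A quick check with bc:

# Sketch only: IO/s multiplied by secs/100000 ios should give ~100000.
echo '938.67 * 106.53' | bc -l   # ~99996
echo '874.67 * 114.33' | bc -l   # ~100001
echo '789.33 * 126.69' | bc -l   # ~100000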
00:08:13.478 Getting orig temperature thresholds of all controllers 00:08:13.478 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.478 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.478 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.478 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.478 Setting all controllers temperature threshold low to trigger AER 00:08:13.478 Waiting for all controllers temperature threshold to be set lower 00:08:13.478 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.478 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:13.478 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.478 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:13.478 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.478 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:13.478 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.478 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:13.478 Waiting for all controllers to trigger AER and reset threshold 00:08:13.478 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.478 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.478 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.478 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.478 Cleaning up... 00:08:13.478 ************************************ 00:08:13.478 END TEST nvme_single_aen 00:08:13.478 ************************************ 00:08:13.478 00:08:13.478 real 0m0.203s 00:08:13.478 user 0m0.069s 00:08:13.478 sys 0m0.098s 00:08:13.478 10:08:18 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.478 10:08:18 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:13.478 10:08:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:13.478 10:08:19 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.478 10:08:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.478 10:08:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.478 ************************************ 00:08:13.478 START TEST nvme_doorbell_aers 00:08:13.478 ************************************ 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
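As the traced commands above show, nvme_doorbell_aers builds its device list by piping the JSON config from gen_nvme.sh through jq to extract the PCIe addresses. Run standalone on this rig, the same pipeline would print the four addresses echoed by the printf that follows:

# Sketch only: the enumeration pipeline exactly as traced above.
/home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh | jq -r '.config[].params.traddr'
# expected on this rig: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0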
00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:13.478 10:08:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:13.739 [2024-11-04 10:08:19.291591] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:23.705 Executing: test_write_invalid_db 00:08:23.705 Waiting for AER completion... 00:08:23.705 Failure: test_write_invalid_db 00:08:23.705 00:08:23.705 Executing: test_invalid_db_write_overflow_sq 00:08:23.705 Waiting for AER completion... 00:08:23.705 Failure: test_invalid_db_write_overflow_sq 00:08:23.705 00:08:23.705 Executing: test_invalid_db_write_overflow_cq 00:08:23.705 Waiting for AER completion... 00:08:23.705 Failure: test_invalid_db_write_overflow_cq 00:08:23.705 00:08:23.706 10:08:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:23.706 10:08:29 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:23.706 [2024-11-04 10:08:29.328568] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:33.733 Executing: test_write_invalid_db 00:08:33.733 Waiting for AER completion... 00:08:33.733 Failure: test_write_invalid_db 00:08:33.733 00:08:33.733 Executing: test_invalid_db_write_overflow_sq 00:08:33.733 Waiting for AER completion... 00:08:33.733 Failure: test_invalid_db_write_overflow_sq 00:08:33.733 00:08:33.733 Executing: test_invalid_db_write_overflow_cq 00:08:33.733 Waiting for AER completion... 00:08:33.733 Failure: test_invalid_db_write_overflow_cq 00:08:33.733 00:08:33.733 10:08:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:33.733 10:08:39 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:33.733 [2024-11-04 10:08:39.341684] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:43.790 Executing: test_write_invalid_db 00:08:43.790 Waiting for AER completion... 00:08:43.790 Failure: test_write_invalid_db 00:08:43.790 00:08:43.791 Executing: test_invalid_db_write_overflow_sq 00:08:43.791 Waiting for AER completion... 00:08:43.791 Failure: test_invalid_db_write_overflow_sq 00:08:43.791 00:08:43.791 Executing: test_invalid_db_write_overflow_cq 00:08:43.791 Waiting for AER completion... 
00:08:43.791 Failure: test_invalid_db_write_overflow_cq 00:08:43.791 00:08:43.791 10:08:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:43.791 10:08:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:43.791 [2024-11-04 10:08:49.365412] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 Executing: test_write_invalid_db 00:08:53.828 Waiting for AER completion... 00:08:53.828 Failure: test_write_invalid_db 00:08:53.828 00:08:53.828 Executing: test_invalid_db_write_overflow_sq 00:08:53.828 Waiting for AER completion... 00:08:53.828 Failure: test_invalid_db_write_overflow_sq 00:08:53.828 00:08:53.828 Executing: test_invalid_db_write_overflow_cq 00:08:53.828 Waiting for AER completion... 00:08:53.828 Failure: test_invalid_db_write_overflow_cq 00:08:53.828 00:08:53.828 00:08:53.828 real 0m40.184s 00:08:53.828 user 0m34.085s 00:08:53.828 sys 0m5.738s 00:08:53.828 10:08:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.828 ************************************ 00:08:53.828 10:08:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:53.828 END TEST nvme_doorbell_aers 00:08:53.828 ************************************ 00:08:53.828 10:08:59 nvme -- nvme/nvme.sh@97 -- # uname 00:08:53.828 10:08:59 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:53.828 10:08:59 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:53.828 10:08:59 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:53.828 10:08:59 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.828 10:08:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:53.828 ************************************ 00:08:53.828 START TEST nvme_multi_aen 00:08:53.828 ************************************ 00:08:53.828 10:08:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:53.828 [2024-11-04 10:08:59.453886] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.454082] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.454097] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.455408] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.455439] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.455447] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.456509] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. 
Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.456533] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.456541] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.457521] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.457618] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 [2024-11-04 10:08:59.457628] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63264) is not found. Dropping the request. 00:08:53.828 Child process pid: 63779 00:08:54.088 [Child] Asynchronous Event Request test 00:08:54.088 [Child] Attached to 0000:00:10.0 00:08:54.088 [Child] Attached to 0000:00:11.0 00:08:54.088 [Child] Attached to 0000:00:13.0 00:08:54.088 [Child] Attached to 0000:00:12.0 00:08:54.088 [Child] Registering asynchronous event callbacks... 00:08:54.088 [Child] Getting orig temperature thresholds of all controllers 00:08:54.088 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:54.088 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 [Child] Cleaning up... 00:08:54.088 Asynchronous Event Request test 00:08:54.088 Attached to 0000:00:10.0 00:08:54.088 Attached to 0000:00:11.0 00:08:54.088 Attached to 0000:00:13.0 00:08:54.088 Attached to 0000:00:12.0 00:08:54.088 Reset controller to setup AER completions for this process 00:08:54.088 Registering asynchronous event callbacks... 
00:08:54.088 Getting orig temperature thresholds of all controllers 00:08:54.088 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:54.088 Setting all controllers temperature threshold low to trigger AER 00:08:54.088 Waiting for all controllers temperature threshold to be set lower 00:08:54.088 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:54.088 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:54.088 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:54.088 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:54.088 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:54.088 Waiting for all controllers to trigger AER and reset threshold 00:08:54.088 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:54.088 Cleaning up... 00:08:54.088 00:08:54.088 real 0m0.470s 00:08:54.088 user 0m0.148s 00:08:54.088 sys 0m0.209s 00:08:54.088 10:08:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.088 10:08:59 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 ************************************ 00:08:54.088 END TEST nvme_multi_aen 00:08:54.088 ************************************ 00:08:54.088 10:08:59 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:54.088 10:08:59 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:54.088 10:08:59 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.088 10:08:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 ************************************ 00:08:54.088 START TEST nvme_startup 00:08:54.088 ************************************ 00:08:54.088 10:08:59 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:54.349 Initializing NVMe Controllers 00:08:54.349 Attached to 0000:00:10.0 00:08:54.349 Attached to 0000:00:11.0 00:08:54.349 Attached to 0000:00:13.0 00:08:54.349 Attached to 0000:00:12.0 00:08:54.349 Initialization complete. 00:08:54.349 Time used:162676.719 (us). 
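[Note] Both AEN tests above (nvme_single_aen and nvme_multi_aen) exercise the same mechanism: each controller's temperature threshold is dropped below its current reading (323 Kelvin reported against a 343 Kelvin default threshold), which makes the device fire an Asynchronous Event, and aer_cb then restores the threshold. The invocations, copied from the run_test lines above (the -T/-m/-i flag meanings are inferred from context here, not from the tool's help text):

  AER=/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer
  $AER -T -i 0      # single-process pass (nvme_single_aen)
  $AER -m -T -i 0   # multi-process pass (nvme_multi_aen); forks the [Child] seen above

In the multi-process pass the child (pid 63779 in this run) attaches to all four controllers and observes the same threshold/AER cycle before the parent repeats it.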
00:08:54.349 00:08:54.349 real 0m0.241s 00:08:54.349 user 0m0.088s 00:08:54.349 sys 0m0.108s 00:08:54.349 10:08:59 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.349 10:08:59 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 ************************************ 00:08:54.349 END TEST nvme_startup 00:08:54.349 ************************************ 00:08:54.349 10:09:00 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:54.349 10:09:00 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:54.349 10:09:00 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.349 10:09:00 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:54.349 ************************************ 00:08:54.349 START TEST nvme_multi_secondary 00:08:54.349 ************************************ 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63835 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63836 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:54.349 10:09:00 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:57.647 Initializing NVMe Controllers 00:08:57.647 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.647 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.647 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.647 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.647 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:57.647 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:57.647 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:57.647 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:57.647 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:57.647 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:57.647 Initialization complete. Launching workers. 
00:08:57.647 ======================================================== 00:08:57.647 Latency(us) 00:08:57.647 Device Information : IOPS MiB/s Average min max 00:08:57.647 PCIE (0000:00:10.0) NSID 1 from core 1: 7608.23 29.72 2101.31 766.02 7954.96 00:08:57.647 PCIE (0000:00:11.0) NSID 1 from core 1: 7608.23 29.72 2102.54 794.55 8032.63 00:08:57.647 PCIE (0000:00:13.0) NSID 1 from core 1: 7608.23 29.72 2102.52 775.45 7873.39 00:08:57.647 PCIE (0000:00:12.0) NSID 1 from core 1: 7608.23 29.72 2102.52 796.14 8056.17 00:08:57.647 PCIE (0000:00:12.0) NSID 2 from core 1: 7608.23 29.72 2102.52 791.84 8094.85 00:08:57.647 PCIE (0000:00:12.0) NSID 3 from core 1: 7608.23 29.72 2102.64 785.31 7877.75 00:08:57.647 ======================================================== 00:08:57.647 Total : 45649.36 178.32 2102.34 766.02 8094.85 00:08:57.647 00:08:57.905 Initializing NVMe Controllers 00:08:57.905 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:57.905 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:57.906 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:57.906 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:57.906 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:57.906 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:57.906 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:57.906 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:57.906 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:57.906 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:57.906 Initialization complete. Launching workers. 00:08:57.906 ======================================================== 00:08:57.906 Latency(us) 00:08:57.906 Device Information : IOPS MiB/s Average min max 00:08:57.906 PCIE (0000:00:10.0) NSID 1 from core 2: 3476.07 13.58 4600.77 1046.74 20506.44 00:08:57.906 PCIE (0000:00:11.0) NSID 1 from core 2: 3476.07 13.58 4602.50 1234.36 16381.19 00:08:57.906 PCIE (0000:00:13.0) NSID 1 from core 2: 3476.07 13.58 4602.05 1285.90 16594.42 00:08:57.906 PCIE (0000:00:12.0) NSID 1 from core 2: 3476.07 13.58 4602.34 1181.20 17157.53 00:08:57.906 PCIE (0000:00:12.0) NSID 2 from core 2: 3476.07 13.58 4601.88 983.01 20611.73 00:08:57.906 PCIE (0000:00:12.0) NSID 3 from core 2: 3476.07 13.58 4602.34 1059.63 20492.67 00:08:57.906 ======================================================== 00:08:57.906 Total : 20856.42 81.47 4601.98 983.01 20611.73 00:08:57.906 00:08:57.906 10:09:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63835 00:08:59.811 Initializing NVMe Controllers 00:08:59.811 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.811 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.811 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.811 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.811 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:59.811 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:59.811 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:59.811 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:59.811 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:59.811 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:59.811 Initialization complete. Launching workers. 
00:08:59.811 ======================================================== 00:08:59.811 Latency(us) 00:08:59.811 Device Information : IOPS MiB/s Average min max 00:08:59.811 PCIE (0000:00:10.0) NSID 1 from core 0: 11311.15 44.18 1413.29 666.65 10940.75 00:08:59.811 PCIE (0000:00:11.0) NSID 1 from core 0: 11314.35 44.20 1413.73 681.75 10795.61 00:08:59.812 PCIE (0000:00:13.0) NSID 1 from core 0: 11310.55 44.18 1414.16 684.51 11784.23 00:08:59.812 PCIE (0000:00:12.0) NSID 1 from core 0: 11311.15 44.18 1414.04 674.01 11855.16 00:08:59.812 PCIE (0000:00:12.0) NSID 2 from core 0: 11310.15 44.18 1414.12 673.41 11439.96 00:08:59.812 PCIE (0000:00:12.0) NSID 3 from core 0: 11311.15 44.18 1413.95 682.04 11253.78 00:08:59.812 ======================================================== 00:08:59.812 Total : 67868.48 265.11 1413.88 666.65 11855.16 00:08:59.812 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63836 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63905 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63906 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:59.812 10:09:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:03.160 Initializing NVMe Controllers 00:09:03.160 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.160 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:03.160 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:03.160 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:03.160 Initialization complete. Launching workers. 
00:09:03.160 ======================================================== 00:09:03.160 Latency(us) 00:09:03.160 Device Information : IOPS MiB/s Average min max 00:09:03.160 PCIE (0000:00:10.0) NSID 1 from core 0: 7606.65 29.71 2101.95 710.03 11289.77 00:09:03.160 PCIE (0000:00:11.0) NSID 1 from core 0: 7606.65 29.71 2103.01 729.22 11358.08 00:09:03.160 PCIE (0000:00:13.0) NSID 1 from core 0: 7606.65 29.71 2103.00 710.30 9121.61 00:09:03.160 PCIE (0000:00:12.0) NSID 1 from core 0: 7606.65 29.71 2102.99 752.30 10226.96 00:09:03.160 PCIE (0000:00:12.0) NSID 2 from core 0: 7606.65 29.71 2102.95 746.78 10573.31 00:09:03.160 PCIE (0000:00:12.0) NSID 3 from core 0: 7606.65 29.71 2103.00 741.02 11188.84 00:09:03.160 ======================================================== 00:09:03.160 Total : 45639.89 178.28 2102.81 710.03 11358.08 00:09:03.160 00:09:03.160 Initializing NVMe Controllers 00:09:03.160 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:03.160 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:03.160 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:03.160 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:03.160 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:03.160 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:03.160 Initialization complete. Launching workers. 00:09:03.160 ======================================================== 00:09:03.160 Latency(us) 00:09:03.160 Device Information : IOPS MiB/s Average min max 00:09:03.160 PCIE (0000:00:10.0) NSID 1 from core 1: 7918.31 30.93 2019.28 719.82 6723.59 00:09:03.160 PCIE (0000:00:11.0) NSID 1 from core 1: 7918.31 30.93 2020.34 744.59 6205.25 00:09:03.160 PCIE (0000:00:13.0) NSID 1 from core 1: 7918.31 30.93 2020.38 734.52 6451.50 00:09:03.160 PCIE (0000:00:12.0) NSID 1 from core 1: 7918.31 30.93 2020.44 733.79 6201.51 00:09:03.160 PCIE (0000:00:12.0) NSID 2 from core 1: 7918.31 30.93 2020.52 737.10 6220.13 00:09:03.160 PCIE (0000:00:12.0) NSID 3 from core 1: 7918.31 30.93 2020.58 732.64 5997.79 00:09:03.160 ======================================================== 00:09:03.160 Total : 47509.83 185.59 2020.26 719.82 6723.59 00:09:03.160 00:09:05.076 Initializing NVMe Controllers 00:09:05.076 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:05.076 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:05.076 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:05.076 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:05.076 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:05.076 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:05.076 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:05.076 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:05.076 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:05.076 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:05.076 Initialization complete. Launching workers. 
00:09:05.076 ======================================================== 00:09:05.076 Latency(us) 00:09:05.076 Device Information : IOPS MiB/s Average min max 00:09:05.076 PCIE (0000:00:10.0) NSID 1 from core 2: 4630.70 18.09 3452.45 733.41 12306.54 00:09:05.076 PCIE (0000:00:11.0) NSID 1 from core 2: 4633.89 18.10 3452.30 752.71 12566.31 00:09:05.076 PCIE (0000:00:13.0) NSID 1 from core 2: 4633.89 18.10 3452.30 760.03 12532.84 00:09:05.076 PCIE (0000:00:12.0) NSID 1 from core 2: 4633.89 18.10 3452.12 757.15 12276.84 00:09:05.076 PCIE (0000:00:12.0) NSID 2 from core 2: 4633.89 18.10 3452.29 758.13 12941.33 00:09:05.076 PCIE (0000:00:12.0) NSID 3 from core 2: 4633.89 18.10 3452.10 752.79 12544.50 00:09:05.076 ======================================================== 00:09:05.076 Total : 27800.16 108.59 3452.26 733.41 12941.33 00:09:05.076 00:09:05.076 ************************************ 00:09:05.076 END TEST nvme_multi_secondary 00:09:05.076 ************************************ 00:09:05.076 10:09:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63905 00:09:05.076 10:09:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63906 00:09:05.076 00:09:05.076 real 0m10.643s 00:09:05.076 user 0m18.412s 00:09:05.076 sys 0m0.621s 00:09:05.076 10:09:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:05.076 10:09:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:05.076 10:09:10 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:05.076 10:09:10 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:05.076 10:09:10 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62862 ]] 00:09:05.076 10:09:10 nvme -- common/autotest_common.sh@1092 -- # kill 62862 00:09:05.076 10:09:10 nvme -- common/autotest_common.sh@1093 -- # wait 62862 00:09:05.076 [2024-11-04 10:09:10.703872] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.703946] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.703976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.703995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.706242] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.706293] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.706311] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.706329] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.708519] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 
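[Note] The nvme_multi_secondary rounds that completed above all share one shape: two backgrounded spdk_nvme_perf processes join the same DPDK shared-memory instance via -i 0 while a third runs in the foreground, each pinned to its own core mask. A sketch of the first round built from the nvme.sh xtrace above (flag meanings per spdk_nvme_perf's usual conventions: -q queue depth, -w workload, -o IO size in bytes, -t runtime in seconds, -c core mask):

  PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
  $PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, 5 s
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, 3 s
  $PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, foreground
  wait $pid0; wait $pid1

The second round (pids 63905/63906) swaps the runtimes so the 5-second run sits on core 2, which is why its foreground process outlives the backgrounded ones there.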
00:09:05.076 [2024-11-04 10:09:10.708715] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.708735] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.708752] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.710914] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.710959] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.710976] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.076 [2024-11-04 10:09:10.710995] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63778) is not found. Dropping the request. 00:09:05.338 [2024-11-04 10:09:10.822943] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:09:05.338 10:09:10 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:09:05.338 10:09:10 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:09:05.338 10:09:10 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:05.338 10:09:10 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:05.338 10:09:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:05.338 10:09:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:05.338 ************************************ 00:09:05.338 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:05.338 ************************************ 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:05.338 * Looking for test storage... 
00:09:05.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:05.338 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:05.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.339 --rc genhtml_branch_coverage=1 00:09:05.339 --rc genhtml_function_coverage=1 00:09:05.339 --rc genhtml_legend=1 00:09:05.339 --rc geninfo_all_blocks=1 00:09:05.339 --rc geninfo_unexecuted_blocks=1 00:09:05.339 00:09:05.339 ' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:05.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.339 --rc genhtml_branch_coverage=1 00:09:05.339 --rc genhtml_function_coverage=1 00:09:05.339 --rc genhtml_legend=1 00:09:05.339 --rc geninfo_all_blocks=1 00:09:05.339 --rc geninfo_unexecuted_blocks=1 00:09:05.339 00:09:05.339 ' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:05.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.339 --rc genhtml_branch_coverage=1 00:09:05.339 --rc genhtml_function_coverage=1 00:09:05.339 --rc genhtml_legend=1 00:09:05.339 --rc geninfo_all_blocks=1 00:09:05.339 --rc geninfo_unexecuted_blocks=1 00:09:05.339 00:09:05.339 ' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:05.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:05.339 --rc genhtml_branch_coverage=1 00:09:05.339 --rc genhtml_function_coverage=1 00:09:05.339 --rc genhtml_legend=1 00:09:05.339 --rc geninfo_all_blocks=1 00:09:05.339 --rc geninfo_unexecuted_blocks=1 00:09:05.339 00:09:05.339 ' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:05.339 
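[Note] The knobs being set here frame the whole stuck-admin-command test: the injected admin command will be held for 15 s (err_injection_timeout=15000000 us), the controller reset has to finish in far less (test_timeout=5 s), and the expected completion status, set just below, is sct 0 / sc 1, which the driver later prints as INVALID OPCODE (00/01). The pass criteria as they surface later in this section's xtrace (the failure handling around these checks is not visible in the trace):

  err_injection_timeout=15000000   # us the stuck command may sit before it is completed
  test_timeout=5                   # s allowed between start_time and the post-reset date
  err_injection_sct=0              # expected Status Code Type
  err_injection_sc=1               # expected Status Code
  # nvme_status_sct/nvme_status_sc are decoded from the c2h completion in /tmp/err_inj_*.txt:
  (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct ))
  (( diff_time > test_timeout ))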
10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:05.339 10:09:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:05.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64074 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64074 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 64074 ']' 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:05.339 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:05.599 [2024-11-04 10:09:11.106907] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:09:05.599 [2024-11-04 10:09:11.107027] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64074 ] 00:09:05.599 [2024-11-04 10:09:11.276611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.860 [2024-11-04 10:09:11.384988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.860 [2024-11-04 10:09:11.385073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.860 [2024-11-04 10:09:11.385546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.860 [2024-11-04 10:09:11.385561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.429 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:06.429 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:09:06.429 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:06.429 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.429 10:09:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.429 nvme0n1 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_qnYZA.txt 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.429 true 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730714952 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64097 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:06.429 10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:06.429 
10:09:12 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:08.342 [2024-11-04 10:09:14.077598] nvme_ctrlr.c:1714:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:08.342 [2024-11-04 10:09:14.077988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:08.342 [2024-11-04 10:09:14.078022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:08.342 [2024-11-04 10:09:14.078036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:08.342 [2024-11-04 10:09:14.079648] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:08.342 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64097 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64097 00:09:08.342 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64097 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_qnYZA.txt 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_qnYZA.txt 00:09:08.603 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64074 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 64074 ']' 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 64074 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 64074 00:09:08.604 killing process with pid 64074 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 64074' 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 64074 00:09:08.604 10:09:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 64074 00:09:09.992 10:09:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:09.992 10:09:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:09.992 00:09:09.992 real 0m4.858s 00:09:09.992 user 0m17.186s 00:09:09.992 sys 0m0.505s 00:09:09.992 10:09:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 
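[Note] Stripped of the xtrace around it, the RPC sequence this test ran reduces to the following; every command appears verbatim in the trace above, so only the shell glue is reconstructed here:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Attach the first controller, then arm a one-shot injection on admin opcode 10
  # (0x0a, Get Features): hold it for up to 15 s and complete it with sct=0/sc=1.
  $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0
  $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
      --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
  # Fire a Get Features in the background so it gets stuck on the armed injection;
  # the -c payload is the base64-encoded SQE (opcode 0x0a, cdw10=7 = Number of Queues).
  $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
      -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
  sleep 2
  $RPC bdev_nvme_reset_controller nvme0   # succeeds; stuck command completes as INVALID OPCODE (00/01)
  wait $!                                 # collect the send_cmd, whose .cpl field is then decoded
  $RPC bdev_nvme_detach_controller nvme0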
00:09:09.992 ************************************ 00:09:09.992 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:09.992 ************************************ 00:09:09.992 10:09:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:09.992 10:09:15 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:09.992 10:09:15 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:09.992 10:09:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:09.992 10:09:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.992 10:09:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:10.254 ************************************ 00:09:10.254 START TEST nvme_fio 00:09:10.254 ************************************ 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:10.254 10:09:15 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:10.254 10:09:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:10.514 10:09:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:10.514 10:09:16 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:10.514 10:09:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:10.514 10:09:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:10.514 10:09:16 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:10.514 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:10.775 10:09:16 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:10.775 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:10.775 fio-3.35 00:09:10.775 Starting 1 thread 00:09:17.351 00:09:17.351 test: (groupid=0, jobs=1): err= 0: pid=64232: Mon Nov 4 10:09:22 2024 00:09:17.351 read: IOPS=23.1k, BW=90.3MiB/s (94.6MB/s)(181MiB/2001msec) 00:09:17.351 slat (usec): min=3, max=133, avg= 5.00, stdev= 2.09 00:09:17.351 clat (usec): min=222, max=9419, avg=2758.79, stdev=780.64 00:09:17.351 lat (usec): min=226, max=9429, avg=2763.78, stdev=781.76 00:09:17.351 clat percentiles (usec): 00:09:17.351 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2409], 00:09:17.351 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:17.351 | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3556], 95.00th=[ 4555], 00:09:17.351 | 99.00th=[ 5997], 99.50th=[ 6259], 99.90th=[ 7635], 99.95th=[ 8717], 00:09:17.351 | 99.99th=[ 9372] 00:09:17.351 bw ( KiB/s): min=84272, max=95680, per=98.37%, avg=90919.33, stdev=5933.40, samples=3 00:09:17.351 iops : min=21068, max=23920, avg=22729.67, stdev=1483.27, samples=3 00:09:17.351 write: IOPS=23.0k, BW=89.7MiB/s (94.1MB/s)(180MiB/2001msec); 0 zone resets 00:09:17.351 slat (nsec): min=3469, max=52087, avg=5209.10, stdev=1997.40 00:09:17.351 clat (usec): min=279, max=9391, avg=2769.18, stdev=793.77 00:09:17.351 lat (usec): min=284, max=9403, avg=2774.39, stdev=794.85 00:09:17.351 clat percentiles (usec): 00:09:17.351 | 1.00th=[ 1713], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2409], 00:09:17.351 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:17.351 | 70.00th=[ 2671], 80.00th=[ 2835], 90.00th=[ 3556], 95.00th=[ 4555], 00:09:17.351 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 8356], 99.95th=[ 8979], 00:09:17.351 | 99.99th=[ 9372] 00:09:17.351 bw ( KiB/s): min=84184, max=96672, per=99.15%, avg=91106.33, stdev=6353.58, samples=3 00:09:17.352 iops : min=21046, max=24168, avg=22776.33, stdev=1588.31, samples=3 00:09:17.352 lat (usec) : 250=0.01%, 500=0.02%, 750=0.02%, 1000=0.05% 00:09:17.352 lat (msec) : 2=2.85%, 4=88.93%, 10=8.14% 00:09:17.352 cpu : usr=99.30%, sys=0.00%, ctx=2, majf=0, minf=606 
00:09:17.352 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:17.352 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:17.352 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:17.352 issued rwts: total=46238,45968,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:17.352 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:17.352 00:09:17.352 Run status group 0 (all jobs): 00:09:17.352 READ: bw=90.3MiB/s (94.6MB/s), 90.3MiB/s-90.3MiB/s (94.6MB/s-94.6MB/s), io=181MiB (189MB), run=2001-2001msec 00:09:17.352 WRITE: bw=89.7MiB/s (94.1MB/s), 89.7MiB/s-89.7MiB/s (94.1MB/s-94.1MB/s), io=180MiB (188MB), run=2001-2001msec 00:09:17.352 ----------------------------------------------------- 00:09:17.352 Suppressions used: 00:09:17.352 count bytes template 00:09:17.352 1 32 /usr/src/fio/parse.c 00:09:17.352 1 8 libtcmalloc_minimal.so 00:09:17.352 ----------------------------------------------------- 00:09:17.352 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:17.352 10:09:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:17.352 10:09:23 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:17.352 10:09:23 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:17.352 10:09:23 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:17.352 10:09:23 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:17.610 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.610 fio-3.35 00:09:17.610 Starting 1 thread 00:09:21.823 00:09:21.823 test: (groupid=0, jobs=1): err= 0: pid=64293: Mon Nov 4 10:09:26 2024 00:09:21.823 read: IOPS=15.7k, BW=61.4MiB/s (64.3MB/s)(125MiB/2043msec) 00:09:21.823 slat (nsec): min=3345, max=57841, avg=5288.90, stdev=2591.48 00:09:21.823 clat (usec): min=858, max=52657, avg=2942.55, stdev=1931.79 00:09:21.823 lat (usec): min=861, max=52662, avg=2947.84, stdev=1932.26 00:09:21.823 clat percentiles (usec): 00:09:21.823 | 1.00th=[ 1221], 5.00th=[ 1500], 10.00th=[ 1876], 20.00th=[ 2311], 00:09:21.823 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2606], 00:09:21.823 | 70.00th=[ 2769], 80.00th=[ 3294], 90.00th=[ 4228], 95.00th=[ 5342], 00:09:21.823 | 99.00th=[ 9634], 99.50th=[11600], 99.90th=[16450], 99.95th=[44827], 00:09:21.823 | 99.99th=[51643] 00:09:21.823 bw ( KiB/s): min=31256, max=92608, per=100.00%, avg=64128.00, stdev=28924.67, samples=4 00:09:21.823 iops : min= 7814, max=23152, avg=16032.00, stdev=7231.17, samples=4 00:09:21.823 write: IOPS=15.7k, BW=61.5MiB/s (64.5MB/s)(126MiB/2043msec); 0 zone resets 00:09:21.823 slat (nsec): min=3480, max=62176, avg=5532.52, stdev=2547.10 00:09:21.823 clat (usec): min=826, max=93623, avg=5165.66, stdev=6793.35 00:09:21.823 lat (usec): min=831, max=93628, avg=5171.20, stdev=6793.57 00:09:21.823 clat percentiles (usec): 00:09:21.823 | 1.00th=[ 1336], 5.00th=[ 1909], 10.00th=[ 2212], 20.00th=[ 2409], 00:09:21.823 | 30.00th=[ 2474], 40.00th=[ 2540], 50.00th=[ 2638], 60.00th=[ 2802], 00:09:21.823 | 70.00th=[ 3752], 80.00th=[ 6259], 90.00th=[12256], 95.00th=[15139], 00:09:21.823 | 99.00th=[46400], 99.50th=[52691], 99.90th=[76022], 99.95th=[84411], 00:09:21.823 | 99.99th=[90702] 00:09:21.823 bw ( KiB/s): min=31696, max=92392, per=100.00%, avg=64110.00, stdev=28419.58, samples=4 00:09:21.823 iops : min= 7924, max=23098, avg=16027.50, stdev=7104.90, samples=4 00:09:21.823 lat (usec) : 1000=0.04% 00:09:21.823 lat (msec) : 2=9.05%, 4=71.06%, 10=11.98%, 20=7.02%, 50=0.48% 00:09:21.823 lat (msec) : 100=0.38% 00:09:21.823 cpu : usr=99.07%, sys=0.24%, ctx=3, majf=0, minf=607 00:09:21.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:21.823 issued rwts: total=32095,32151,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:21.823 00:09:21.823 Run status group 0 (all jobs): 00:09:21.823 READ: bw=61.4MiB/s (64.3MB/s), 61.4MiB/s-61.4MiB/s (64.3MB/s-64.3MB/s), io=125MiB (131MB), run=2043-2043msec 00:09:21.823 WRITE: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=126MiB (132MB), run=2043-2043msec 00:09:21.823 ----------------------------------------------------- 00:09:21.823 Suppressions used: 00:09:21.823 count bytes template 00:09:21.823 1 32 /usr/src/fio/parse.c 00:09:21.823 1 8 libtcmalloc_minimal.so 00:09:21.823 
----------------------------------------------------- 00:09:21.823 00:09:21.823 10:09:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:21.823 10:09:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:21.823 10:09:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:21.823 10:09:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:21.823 10:09:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:21.823 10:09:27 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:21.823 10:09:27 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:21.823 10:09:27 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:21.823 10:09:27 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:21.823 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:21.823 fio-3.35 00:09:21.823 Starting 1 thread 00:09:29.962 00:09:29.962 test: (groupid=0, jobs=1): err= 0: pid=64354: Mon Nov 4 10:09:34 2024 00:09:29.962 read: IOPS=22.0k, BW=85.9MiB/s (90.0MB/s)(172MiB/2001msec) 00:09:29.962 slat (nsec): min=3373, max=62700, avg=5297.96, stdev=2972.37 00:09:29.962 clat (usec): min=214, max=9108, avg=2903.81, stdev=992.84 00:09:29.962 lat (usec): min=219, max=9118, avg=2909.10, stdev=994.74 00:09:29.962 clat percentiles (usec): 00:09:29.962 | 1.00th=[ 1811], 5.00th=[ 
2245], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:29.962 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:09:29.962 | 70.00th=[ 2704], 80.00th=[ 2900], 90.00th=[ 4228], 95.00th=[ 5407], 00:09:29.962 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 8356], 99.95th=[ 8586], 00:09:29.962 | 99.99th=[ 8848] 00:09:29.962 bw ( KiB/s): min=74120, max=91600, per=96.81%, avg=85104.00, stdev=9565.26, samples=3 00:09:29.962 iops : min=18530, max=22900, avg=21276.00, stdev=2391.32, samples=3 00:09:29.962 write: IOPS=21.8k, BW=85.3MiB/s (89.4MB/s)(171MiB/2001msec); 0 zone resets 00:09:29.962 slat (usec): min=3, max=179, avg= 5.56, stdev= 3.27 00:09:29.962 clat (usec): min=229, max=8925, avg=2913.05, stdev=998.79 00:09:29.962 lat (usec): min=234, max=8935, avg=2918.61, stdev=1000.74 00:09:29.962 clat percentiles (usec): 00:09:29.962 | 1.00th=[ 1762], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2474], 00:09:29.962 | 30.00th=[ 2507], 40.00th=[ 2540], 50.00th=[ 2573], 60.00th=[ 2606], 00:09:29.962 | 70.00th=[ 2704], 80.00th=[ 2933], 90.00th=[ 4228], 95.00th=[ 5473], 00:09:29.962 | 99.00th=[ 6718], 99.50th=[ 7242], 99.90th=[ 8029], 99.95th=[ 8455], 00:09:29.962 | 99.99th=[ 8848] 00:09:29.962 bw ( KiB/s): min=74192, max=92080, per=97.63%, avg=85274.67, stdev=9680.74, samples=3 00:09:29.962 iops : min=18548, max=23020, avg=21318.67, stdev=2420.19, samples=3 00:09:29.962 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:29.962 lat (msec) : 2=2.29%, 4=86.59%, 10=11.07% 00:09:29.962 cpu : usr=99.25%, sys=0.00%, ctx=3, majf=0, minf=606 00:09:29.962 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:29.962 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:29.962 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:29.962 issued rwts: total=43978,43696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:29.962 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:29.962 00:09:29.962 Run status group 0 (all jobs): 00:09:29.962 READ: bw=85.9MiB/s (90.0MB/s), 85.9MiB/s-85.9MiB/s (90.0MB/s-90.0MB/s), io=172MiB (180MB), run=2001-2001msec 00:09:29.962 WRITE: bw=85.3MiB/s (89.4MB/s), 85.3MiB/s-85.3MiB/s (89.4MB/s-89.4MB/s), io=171MiB (179MB), run=2001-2001msec 00:09:29.962 ----------------------------------------------------- 00:09:29.962 Suppressions used: 00:09:29.962 count bytes template 00:09:29.962 1 32 /usr/src/fio/parse.c 00:09:29.962 1 8 libtcmalloc_minimal.so 00:09:29.962 ----------------------------------------------------- 00:09:29.962 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:29.962 10:09:34 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:29.962 10:09:34 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:29.962 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:29.962 fio-3.35 00:09:29.962 Starting 1 thread 00:09:42.296 00:09:42.296 test: (groupid=0, jobs=1): err= 0: pid=64426: Mon Nov 4 10:09:45 2024 00:09:42.296 read: IOPS=24.0k, BW=93.6MiB/s (98.2MB/s)(187MiB/2001msec) 00:09:42.296 slat (usec): min=4, max=304, avg= 4.87, stdev= 2.30 00:09:42.296 clat (usec): min=255, max=10068, avg=2664.94, stdev=663.80 00:09:42.296 lat (usec): min=259, max=10110, avg=2669.81, stdev=664.88 00:09:42.296 clat percentiles (usec): 00:09:42.296 | 1.00th=[ 1713], 5.00th=[ 2147], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:42.296 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2540], 00:09:42.296 | 70.00th=[ 2606], 80.00th=[ 2671], 90.00th=[ 2900], 95.00th=[ 3785], 00:09:42.296 | 99.00th=[ 5997], 99.50th=[ 6587], 99.90th=[ 7635], 99.95th=[ 8586], 00:09:42.296 | 99.99th=[ 9896] 00:09:42.296 bw ( KiB/s): min=93792, max=95832, per=99.20%, avg=95109.33, stdev=1142.64, samples=3 00:09:42.296 iops : min=23448, max=23958, avg=23777.33, stdev=285.66, samples=3 00:09:42.296 write: IOPS=23.8k, BW=93.1MiB/s (97.6MB/s)(186MiB/2001msec); 0 zone resets 00:09:42.296 slat (nsec): min=4291, max=51625, avg=5127.55, stdev=1798.26 00:09:42.296 clat (usec): min=225, max=9996, avg=2669.83, stdev=676.73 00:09:42.296 lat (usec): min=229, max=10013, avg=2674.96, stdev=677.78 00:09:42.296 clat percentiles (usec): 00:09:42.296 | 1.00th=[ 1696], 5.00th=[ 2147], 10.00th=[ 2376], 20.00th=[ 2442], 00:09:42.296 | 30.00th=[ 2474], 40.00th=[ 2507], 50.00th=[ 2540], 60.00th=[ 2573], 00:09:42.296 | 70.00th=[ 2606], 80.00th=[ 2671], 
90.00th=[ 2900], 95.00th=[ 3818], 00:09:42.296 | 99.00th=[ 6128], 99.50th=[ 6652], 99.90th=[ 7635], 99.95th=[ 8717], 00:09:42.296 | 99.99th=[ 9896] 00:09:42.296 bw ( KiB/s): min=93568, max=96672, per=99.79%, avg=95090.67, stdev=1552.83, samples=3 00:09:42.296 iops : min=23392, max=24168, avg=23772.67, stdev=388.21, samples=3 00:09:42.296 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.03% 00:09:42.296 lat (msec) : 2=3.02%, 4=92.49%, 10=4.43%, 20=0.01% 00:09:42.296 cpu : usr=99.35%, sys=0.00%, ctx=5, majf=0, minf=604 00:09:42.296 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:42.296 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:42.296 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:42.296 issued rwts: total=47962,47667,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:42.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:42.296 00:09:42.296 Run status group 0 (all jobs): 00:09:42.296 READ: bw=93.6MiB/s (98.2MB/s), 93.6MiB/s-93.6MiB/s (98.2MB/s-98.2MB/s), io=187MiB (196MB), run=2001-2001msec 00:09:42.296 WRITE: bw=93.1MiB/s (97.6MB/s), 93.1MiB/s-93.1MiB/s (97.6MB/s-97.6MB/s), io=186MiB (195MB), run=2001-2001msec 00:09:42.296 ----------------------------------------------------- 00:09:42.296 Suppressions used: 00:09:42.296 count bytes template 00:09:42.296 1 32 /usr/src/fio/parse.c 00:09:42.296 1 8 libtcmalloc_minimal.so 00:09:42.296 ----------------------------------------------------- 00:09:42.296 00:09:42.296 ************************************ 00:09:42.296 END TEST nvme_fio 00:09:42.296 ************************************ 00:09:42.296 10:09:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:42.296 10:09:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:42.296 00:09:42.296 real 0m30.408s 00:09:42.296 user 0m25.055s 00:09:42.296 sys 0m6.601s 00:09:42.296 10:09:46 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.296 10:09:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:42.296 ************************************ 00:09:42.296 END TEST nvme 00:09:42.296 ************************************ 00:09:42.296 00:09:42.296 real 1m40.225s 00:09:42.296 user 3m47.228s 00:09:42.296 sys 0m17.286s 00:09:42.296 10:09:46 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:42.296 10:09:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:42.296 10:09:46 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:42.296 10:09:46 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:42.296 10:09:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:42.296 10:09:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:42.296 10:09:46 -- common/autotest_common.sh@10 -- # set +x 00:09:42.296 ************************************ 00:09:42.296 START TEST nvme_scc 00:09:42.296 ************************************ 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:42.296 * Looking for test storage... 
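Each of the four per-controller fio runs above uses the same sanitizer-preload pattern from the fio_plugin helper in common/autotest_common.sh: ldd the SPDK fio plugin, pull out the path of the libasan it links against, and put that library first in LD_PRELOAD so the ASAN runtime is initialized before fio dlopen()s the external ioengine. A minimal sketch of the pattern, assuming the paths seen in the trace (the wrapper name run_fio_asan is illustrative, not part of the harness):

    run_fio_asan() {
        local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
        local job=/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio
        local asan_lib
        # Third field of the matching ldd line is the resolved library path,
        # e.g. /usr/lib64/libasan.so.8.
        asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
        # Sanitizer runtime first, plugin second: the ASAN runtime has to be
        # loaded before any instrumented code, or it aborts at startup.
        LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job" \
            '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
    }

The dot-separated BDF in --filename (0000.00.10.0 rather than 0000:00:10.0) is deliberate: fio splits filename lists on ':', so the SPDK ioengine accepts periods in the PCI address instead.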
00:09:42.296 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:42.296 10:09:46 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:42.296 10:09:46 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:42.296 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.296 --rc genhtml_branch_coverage=1 00:09:42.296 --rc genhtml_function_coverage=1 00:09:42.296 --rc genhtml_legend=1 00:09:42.297 --rc geninfo_all_blocks=1 00:09:42.297 --rc geninfo_unexecuted_blocks=1 00:09:42.297 00:09:42.297 ' 00:09:42.297 10:09:46 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.297 --rc genhtml_branch_coverage=1 00:09:42.297 --rc genhtml_function_coverage=1 00:09:42.297 --rc genhtml_legend=1 00:09:42.297 --rc geninfo_all_blocks=1 00:09:42.297 --rc geninfo_unexecuted_blocks=1 00:09:42.297 00:09:42.297 ' 00:09:42.297 10:09:46 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.297 --rc genhtml_branch_coverage=1 00:09:42.297 --rc genhtml_function_coverage=1 00:09:42.297 --rc genhtml_legend=1 00:09:42.297 --rc geninfo_all_blocks=1 00:09:42.297 --rc geninfo_unexecuted_blocks=1 00:09:42.297 00:09:42.297 ' 00:09:42.297 10:09:46 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:42.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:42.297 --rc genhtml_branch_coverage=1 00:09:42.297 --rc genhtml_function_coverage=1 00:09:42.297 --rc genhtml_legend=1 00:09:42.297 --rc geninfo_all_blocks=1 00:09:42.297 --rc geninfo_unexecuted_blocks=1 00:09:42.297 00:09:42.297 ' 00:09:42.297 10:09:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:42.297 10:09:46 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:42.297 10:09:46 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:42.297 10:09:46 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:42.297 10:09:46 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:42.297 10:09:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.297 10:09:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.297 10:09:46 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:42.297 10:09:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:42.297 10:09:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
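The remainder of the trace finishes sourcing test/common/nvme/functions.sh, resets the PCI bindings through scripts/setup.sh, and then enters scan_nvme_ctrls, which snapshots every /sys/class/nvme/nvme* controller: the nvme_get helper pipes nvme id-ctrl output through an IFS=: read loop and evals each 'field : value' pair into a per-controller associative array (the local -gA 'nvme0=()' seen below), which is why the rest of the section is a long run of eval 'nvme0[...]="..."' assignments. A condensed sketch of that parsing idiom, using a fixed array name ctrl_regs in place of the dynamically named nvme0 (simplified: the real helper also walks each controller's namespaces and preserves field order):

    declare -A ctrl_regs

    # Split each id-ctrl line on the first ':' into a field name and value;
    # read hands the remainder of the line (further colons included) to val.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # drop the padding around the name
        [[ -n $reg && -n $val ]] || continue  # skip blank and header lines
        ctrl_regs[$reg]=${val# }              # e.g. ctrl_regs[vid]=0x1b36
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

    echo "vid=${ctrl_regs[vid]} sn=${ctrl_regs[sn]}"

Snapshotting the registers this way lets later checks test a field with a plain array lookup instead of re-querying the device, at the cost of the verbose eval-per-field trace that follows.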
00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:42.297 10:09:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:42.297 10:09:46 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.297 10:09:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:42.297 10:09:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:42.297 10:09:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:42.297 10:09:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:42.297 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:42.297 Waiting for block devices as requested 00:09:42.297 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.297 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.297 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:42.297 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:46.484 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:46.484 10:09:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:46.484 10:09:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.484 10:09:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:46.484 10:09:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.484 10:09:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:46.484 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.485 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:46.486 10:09:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:46.486 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.487 10:09:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:46.487 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
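Once a controller's fields are cached, the same helper runs once per namespace; the @53-@58 steps earlier in this trace walk the controller's sysfs children. A hedged sketch of that walk, reconstructed from the xtrace (the nameref target is declared explicitly here so the snippet is self-contained):

    declare -A nvme0_ns=()
    declare -n _ctrl_ns=nvme0_ns             # the @53 nameref
    ctrl=/sys/class/nvme/nvme0
    for ns in "$ctrl/${ctrl##*/}n"*; do      # matches /sys/class/nvme/nvme0/nvme0n1
      [[ -e $ns ]] || continue               # the @55 existence check
      ns_dev=${ns##*/}                       # nvme0n1, as at @56
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
      _ctrl_ns[${ns##*n}]=$ns_dev            # index "1" -> nvme0n1, as at @58
    done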
00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.488 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.488 10:09:52 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:46.489 10:09:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.489 10:09:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:46.489 10:09:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.489 10:09:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:46.489 10:09:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.489 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.490 
10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
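The hop from nvme0 to nvme1 a few entries back (functions.sh@47-@52 together with scripts/common.sh@18-@27) is the controller enumeration gated by a PCI allow/block check; in this run both lists are empty, so pci_can_use returns 0 and nvme1 at 0000:00:10.0 is accepted. A sketch of that gate under those assumptions; how the PCI address is resolved is my guess, since the trace only shows the resulting value:

    for ctrl in /sys/class/nvme/nvme*; do
      [[ -e $ctrl ]] || continue                         # @48
      pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0 (assumed)
      pci_can_use "$pci" || continue                     # @50; empty lists -> accept
      ctrl_dev=${ctrl##*/}                               # @51: nvme1
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"      # @52
    done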
00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:46.490 10:09:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.490 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:46.491 10:09:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:46.491 10:09:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:46.491 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
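Several of the cached controller fields, e.g. nvme1[oacs]=0x12a above, are bitmasks that decode with plain shell arithmetic. A sketch, with bit labels taken from my reading of the NVMe base spec's OACS layout (the log itself only asserts the raw value):

    oacs=$((0x12a))
    labels=([0]=security [1]=format-nvm [2]=fw-commit [3]=ns-mgmt
            [4]=self-test [5]=directives [6]=nvme-mi [7]=virt-mgmt
            [8]=doorbell-buffer-config)
    for bit in "${!labels[@]}"; do
      (( oacs & (1 << bit) )) && echo "oacs bit $bit: ${labels[$bit]}"
    done
    # 0x12a = bits 1,3,5,8 -> format-nvm, ns-mgmt, directives, doorbell-buffer-config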
00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:46.492 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:46.493 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:46.757 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:46.758 
10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:46.758 10:09:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:46.758 10:09:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:46.758 10:09:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:46.758 10:09:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:46.758 10:09:52 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:46.758 10:09:52 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:46.758 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:46.759 10:09:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
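The wctemp/cctemp values stored just above (343 and 373) are Kelvin, per the NVMe spec's WCTEMP/CCTEMP definitions, so they correspond to 70 C and 100 C. A minimal bash sketch of that conversion, assuming the nvme2 associative array built by the trace is in scope:

    # Hedged sketch: WCTEMP/CCTEMP are Kelvin per the NVMe spec.
    wctemp_c=$(( ${nvme2[wctemp]} - 273 ))   # 343 K -> 70 C warning threshold
    cctemp_c=$(( ${nvme2[cctemp]} - 273 ))   # 373 K -> 100 C critical threshold
    echo "warning=${wctemp_c}C critical=${cctemp_c}C"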
00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:46.759 10:09:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:46.759 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
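Every entry in this stretch of the trace is one iteration of the same nvme_get pattern: pipe nvme-cli's id-ctrl output through an IFS=: / read -r loop, skip empty values, and eval each register into a global associative array. A sketch of that pattern under illustrative names (this is not the literal functions.sh source):

    # Illustrative re-creation of the nvme_get loop traced above.
    # Assumes "key : value" lines from nvme-cli's id-ctrl output.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # the [[ -n ... ]] guard in the trace
        reg=${reg%%[[:space:]]*}         # drop the padding after the key name
        ctrl[$reg]=${val# }              # the eval 'name[reg]="val"' step
    done < <(nvme id-ctrl /dev/nvme2)
    echo "vid=${ctrl[vid]} mdts=${ctrl[mdts]}"

The eval in the real script is what lets the array name (nvme2 here) vary per controller; the sketch hard-codes it for clarity.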
00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
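The sqes=0x66 and cqes=0x44 captured above are packed nibbles: the low nibble is the required (minimum) queue entry size and the high nibble the maximum, each stored as log2 of the byte size. A hedged sketch of the decode, again assuming the nvme2 array:

    # Hedged sketch: decode SQES/CQES nibbles from the values parsed above.
    sqes=${nvme2[sqes]}   # 0x66 from the trace
    cqes=${nvme2[cqes]}   # 0x44 from the trace
    echo "SQE min/max: $(( 1 << (sqes & 0xf) ))/$(( 1 << (sqes >> 4) )) bytes"  # 64/64
    echo "CQE min/max: $(( 1 << (cqes & 0xf) ))/$(( 1 << (cqes >> 4) )) bytes"  # 16/16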
00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:46.760 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:46.761 
10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
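With nsze and flbas for nvme2n1 recorded, the namespace byte size follows: the low bits of flbas select LBA format 4, and assuming that format carries lbads:12 (4096-byte blocks, as nvme1n1's lbaf4 did earlier in this trace), 0x100000 blocks is 4 GiB. A sketch of the arithmetic, assuming the nvme2n1 array:

    # Hedged sketch: namespace size from the fields parsed above, assuming
    # the selected format (flbas & 0xf -> lbaf4) has lbads=12.
    nsze=${nvme2n1[nsze]}                 # 0x100000 blocks
    fmt=$(( ${nvme2n1[flbas]} & 0xf ))    # -> 4
    bytes=$(( nsze << 12 ))               # 1048576 * 4096 bytes
    echo "nvme2n1: lbaf=$fmt size=$bytes bytes ($(( bytes >> 30 )) GiB)"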
00:09:46.761 10:09:52 nvme_scc -- nvme/functions.sh: id-ns /dev/nvme2n1, remaining fields (condensed from the per-register trace):
00:09:46.761   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:09:46.761   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:46.762   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:46.762   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:46.762   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:46.762   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
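The trace repeats one pattern per register: test the value, eval it into a global associative array keyed by the register name, reset IFS, read the next pair. A minimal sketch of that loop, reconstructed from the functions.sh@16-23 markers in the trace rather than copied from the actual script (the register-name trimming and the process-substitution feed are assumptions):

    # Sketch: populate a global assoc array (e.g. nvme2n2) from `nvme id-ns` output.
    nvme_get() {
        local ref=$1 reg val                   # @17: local ref=nvme2n2 reg val
        shift                                  # @18: remaining args: id-ns /dev/nvme2n2
        local -gA "$ref=()"                    # @20: declare the global array
        while IFS=: read -r reg val; do        # @21: split "nsze : 0x100000" at ':'
            reg=${reg//[[:space:]]/}           # trim the register name (assumption)
            [[ -n $val ]] || continue          # @22: skip lines carrying no value
            eval "${ref}[$reg]=\"${val# }\""   # @23: e.g. nvme2n2[nsze]=0x100000
        done < <(nvme "$@")                    # @16: /usr/local/src/nvme-cli/nvme id-ns ...
    }

Called as in the trace, nvme_get nvme2n2 id-ns /dev/nvme2n2, after which ${nvme2n2[nsze]} expands to 0x100000.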
00:09:46.762 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2
00:09:46.763 10:09:52 nvme_scc -- nvme/functions.sh: id-ns /dev/nvme2n2 (condensed):
00:09:46.763   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:46.763   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:09:46.763   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:46.764   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:46.764   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:46.764   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:46.764   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
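Each of these dumps is bracketed by the same @54-58 sequence: glob the controller's sysfs children, run nvme_get on each, then record the device name in _ctrl_ns indexed by namespace number. A short reconstruction, under the same caveat that the surrounding glue and initialisation are assumptions:

    # Sketch: enumerate nvme2's namespaces and index them by number.
    ctrl=/sys/class/nvme/nvme2
    declare -A _ctrl_ns=()
    for ns in "$ctrl/${ctrl##*/}n"*; do          # @54: matches nvme2n1 nvme2n2 nvme2n3
        [[ -e $ns ]] || continue                 # @55: skip if the glob did not match
        ns_dev=${ns##*/}                         # @56: e.g. nvme2n2
        nvme_get "$ns_dev" id-ns "/dev/$ns_dev"  # @57: fill the nvme2n2 array
        _ctrl_ns[${ns##*n}]=$ns_dev              # @58: key "2" for .../nvme2n2
    done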
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]]
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3
00:09:46.764 10:09:52 nvme_scc -- nvme/functions.sh: id-ns /dev/nvme2n3 (condensed; values identical to nvme2n2):
00:09:46.764   nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:46.765   nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0
00:09:46.765   noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127
00:09:46.765   nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:46.765   nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:46.765   lbaf0='ms:0 lbads:9 rp:0 ' lbaf1='ms:8 lbads:9 rp:0 ' lbaf2='ms:16 lbads:9 rp:0 ' lbaf3='ms:64 lbads:9 rp:0 '
00:09:46.765   lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0 ' lbaf6='ms:16 lbads:12 rp:0 ' lbaf7='ms:64 lbads:12 rp:0 '
00:09:46.765 10:09:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3
00:09:46.765 10:09:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2
00:09:46.765 10:09:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]]
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0
00:09:46.766 10:09:52 nvme_scc -- scripts/common.sh@18 -- # local i
00:09:46.766 10:09:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]]
00:09:46.766 10:09:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:46.766 10:09:52 nvme_scc -- scripts/common.sh@27 -- # return 0
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@18 -- # shift
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()'
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3
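Once all namespaces are read, the controller itself is registered (@60-63): the ctrls, nvmes and bdfs maps plus the ordered_ctrls list tie the device name to its namespace table and PCI address. A sketch of that bookkeeping, with the wrapper name and surrounding declarations assumed:

    # Sketch: register a fully parsed controller, as @60-63 do for nvme2.
    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    register_ctrl() {                               # hypothetical wrapper name
        local ctrl_dev=$1 pci=$2
        ctrls["$ctrl_dev"]=$ctrl_dev                # @60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns           # @61: name of its namespace map
        bdfs["$ctrl_dev"]=$pci                      # @62: PCI address from sysfs
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev  # @63: sparse slot 2 for nvme2
    }
    register_ctrl nvme2 0000:00:12.0                # matches the trace above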
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:46.766 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 
10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:46.767 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
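The dump above has already captured nvme3[oncs]=0x15d and nvme3[ocfs]=0x3. ONCS is a bitmask of optional NVM commands; an illustrative decode of 0x15d using the NVMe base spec bit assignments (this decode is not part of the test itself) shows why this controller later qualifies for the SCC test, since bit 8 is Copy:

oncs=0x15d                                 # value captured above for nvme3
names=([0]="Compare" [1]="Write Uncorrectable" [2]="Dataset Management"
       [3]="Write Zeroes" [4]="Save/Select in Features" [5]="Reservations"
       [6]="Timestamp" [7]="Verify" [8]="Copy")
for bit in "${!names[@]}"; do
    # 0x15d = 0b101011101 -> bits 0, 2, 3, 4, 6, 8 are set
    (( oncs & 1 << bit )) && printf 'ONCS bit %d: %s\n' "$bit" "${names[bit]}"
done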
00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.768 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:46.769 10:09:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:46.769 
10:09:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:46.769 10:09:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:46.769 10:09:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:46.769 10:09:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:47.344 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.631 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.631 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.631 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:47.631 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:09:47.631 10:09:53 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:47.632 10:09:53 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:09:47.632 10:09:53 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:47.632 10:09:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:47.632 ************************************
00:09:47.632 START TEST nvme_simple_copy
00:09:47.632 ************************************
00:09:47.632 10:09:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:47.890 Initializing NVMe Controllers
00:09:47.890 Attaching to 0000:00:10.0
00:09:47.890 Controller supports SCC. Attached to 0000:00:10.0
00:09:47.890 Namespace ID: 1 size: 6GB
00:09:47.890 Initialization complete.
00:09:47.890
00:09:47.890 Controller QEMU NVMe Ctrl (12340 )
00:09:47.890 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:47.890 Namespace Block Size:4096
00:09:47.890 Writing LBAs 0 to 63 with Random Data
00:09:47.890 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:47.890 LBAs matching Written Data: 64
00:09:47.890 ************************************
00:09:47.890 END TEST nvme_simple_copy
00:09:47.890 ************************************
00:09:47.890
00:09:47.890 real 0m0.259s
00:09:47.890 user 0m0.099s
00:09:47.890 sys 0m0.059s
00:09:47.890 10:09:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:47.890 10:09:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:48.149 ************************************
00:09:48.149 END TEST nvme_scc
00:09:48.149 ************************************
00:09:48.149
00:09:48.149 real 0m7.452s
00:09:48.149 user 0m1.008s
00:09:48.149 sys 0m1.343s
00:09:48.149 10:09:53 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:09:48.149 10:09:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:48.149 10:09:53 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:48.149 10:09:53 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:48.149 10:09:53 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:48.149 10:09:53 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:48.149 10:09:53 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:48.149 10:09:53 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:09:48.149 10:09:53 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:09:48.149 10:09:53 -- common/autotest_common.sh@10 -- # set +x
00:09:48.149 ************************************
00:09:48.149 START TEST nvme_fdp
00:09:48.149 ************************************
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:09:48.149 * Looking for test storage...
00:09:48.149 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:09:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.149 --rc genhtml_branch_coverage=1
00:09:48.149 --rc genhtml_function_coverage=1
00:09:48.149 --rc genhtml_legend=1
00:09:48.149 --rc geninfo_all_blocks=1
00:09:48.149 --rc geninfo_unexecuted_blocks=1
00:09:48.149
00:09:48.149 '
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:09:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.149 --rc genhtml_branch_coverage=1
00:09:48.149 --rc genhtml_function_coverage=1
00:09:48.149 --rc genhtml_legend=1
00:09:48.149 --rc geninfo_all_blocks=1
00:09:48.149 --rc geninfo_unexecuted_blocks=1
00:09:48.149
00:09:48.149 '
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:09:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.149 --rc genhtml_branch_coverage=1
00:09:48.149 --rc genhtml_function_coverage=1
00:09:48.149 --rc genhtml_legend=1
00:09:48.149 --rc geninfo_all_blocks=1
00:09:48.149 --rc geninfo_unexecuted_blocks=1
00:09:48.149
00:09:48.149 '
00:09:48.149 10:09:53 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:09:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:48.149 --rc genhtml_branch_coverage=1
00:09:48.149 --rc genhtml_function_coverage=1
00:09:48.149 --rc genhtml_legend=1
00:09:48.149 --rc geninfo_all_blocks=1
00:09:48.149 --rc geninfo_unexecuted_blocks=1
00:09:48.149
00:09:48.149 '
00:09:48.149 10:09:53 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:48.149 10:09:53 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:09:48.149 10:09:53 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:09:48.149 10:09:53 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:09:48.149 10:09:53 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:48.149 10:09:53 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:48.150 10:09:53 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.150 10:09:53 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.150 10:09:53 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:48.150 10:09:53 nvme_fdp -- paths/export.sh@5 -- # export PATH
00:09:48.150 10:09:53 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
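The scripts/common.sh trace above (cmp_versions 1.15 '<' 2) is how autotest decides that the installed lcov predates 2.x and therefore exports the 1.x --rc option spellings in LCOV_OPTS and LCOV. A condensed, runnable sketch of that comparison (the real cmp_versions also handles the other operators, and splits on '-' and ':' as well as '.'):

lt() {    # "is $1 strictly older than $2", condensed from cmp_versions
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<<"$1"
    IFS=.-: read -ra ver2 <<<"$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # first field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1    # equal is not less-than
}
lt 1.15 2 && echo "lcov 1.15 < 2: keep the 1.x --rc option names"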
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:09:48.150 10:09:53 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:09:48.150 10:09:53 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:09:48.150 10:09:53 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:09:48.407 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:48.665 Waiting for block devices as requested
00:09:48.665 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:09:48.665 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:09:48.922 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:09:48.922 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:09:54.192 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:09:54.192 10:09:59 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:09:54.192 10:09:59 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:54.192 10:09:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:09:54.192 10:09:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:54.192 10:09:59 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
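scan_nvme_ctrls, just started above, walks /sys/class/nvme/nvme*, resolves each controller's PCI address, and then runs nvme_get to fill the per-controller arrays. A rough standalone sketch of the discovery half; reading the BDF from the sysfs `address` attribute is an assumption here (it holds for PCIe transports), and the pci_can_use filtering is omitted:

declare -A ctrls bdfs
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue        # glob did not match: no controllers present
    name=${ctrl##*/}                  # nvme0, nvme1, ...
    bdf=$(<"$ctrl/address")           # assumed sysfs attribute, e.g. 0000:00:11.0
    ctrls[$name]=$name
    bdfs[$name]=$bdf
done
for name in "${!bdfs[@]}"; do
    printf '%s @ %s\n' "$name" "${bdfs[$name]}"
done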
00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:54.192 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:54.193 10:09:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:54.193 10:09:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:54.193 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
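This nvme0 dump feeds the same lookup machinery the SCC selection used earlier in this log (functions.sh@69-76): the controller name is turned into a reference to its associative array with a bash nameref, and a register is echoed back out. A minimal sketch of that lookup, plus the ONCS bit test that ctrl_has_scc applies to the result:

declare -A nvme0=([oncs]=0x15d)       # stand-in for the array this dump is building
get_nvme_ctrl_feature() {
    local ctrl=$1 reg=${2:-oncs}
    local -n _ctrl=$ctrl              # nameref: aliases the array named by $ctrl
    [[ -n ${_ctrl[$reg]} ]] && echo "${_ctrl[$reg]}"
}
oncs=$(get_nvme_ctrl_feature nvme0 oncs)    # -> 0x15d
(( oncs & 1 << 8 )) && echo "nvme0 supports Copy (SCC)"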
00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:54.194 10:09:59 nvme_fdp -- 
00:09:54.194 10:09:59 nvme_fdp -- nvme/functions.sh@21-23: nvme0 id-ctrl (continued): unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:54.197 10:09:59 nvme_fdp -- nvme/functions.sh@53-57 -- local -n _ctrl_ns=nvme0_ns; for ns in "$ctrl/${ctrl##*/}n"*; [[ -e /sys/class/nvme/nvme0/nvme0n1 ]]; ns_dev=nvme0n1; nvme_get nvme0n1 id-ns /dev/nvme0n1
00:09:54.197 10:09:59 nvme_fdp -- nvme/functions.sh@16-20 -- local ref=nvme0n1 reg val; shift; local -gA 'nvme0n1=()'; /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1
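The functions.sh@16-23 frames above show the loop that produces every register line in this trace: nvme-cli output is split at the colon and eval'd into a global associative array, one entry per identify field. A condensed sketch reconstructed from those frames (not the verbatim SPDK source; the whitespace trimming is an assumption about the nvme-cli output layout):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                    # @20: declare the array, e.g. nvme0n1=()
        while IFS=: read -r reg val; do        # @21: split "nsze : 0x140000" at the colon
            [[ -n $val ]] || continue          # @22: skip lines with no value field
            reg=${reg// /} val=${val# }        # strip field padding (assumed layout)
            eval "${ref}[${reg}]=\"${val}\""   # @23: nvme0n1[nsze]="0x140000"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    nvme_get nvme0n1 id-ns /dev/nvme0n1        # the call traced at @57 above

The nvme0n1 register dump that this call produces follows.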
00:09:54.197 10:09:59 nvme_fdp -- nvme/functions.sh@21-23: nvme0n1 id-ns: nsze=0x140000 ncap=0x140000 nuse=0x140000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@21-23: nvme0n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@58-63 -- _ctrl_ns[1]=nvme0n1; ctrls[nvme0]=nvme0; nvmes[nvme0]=nvme0_ns; bdfs[nvme0]=0000:00:11.0; ordered_ctrls[0]=nvme0
00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@47-52 -- for ctrl in /sys/class/nvme/nvme*; [[ -e /sys/class/nvme/nvme1 ]]; pci=0000:00:10.0; pci_can_use 0000:00:10.0 (scripts/common.sh@18-27 -- return 0); ctrl_dev=nvme1; nvme_get nvme1 id-ctrl /dev/nvme1
00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@16-20 -- local ref=nvme1 reg val; shift; local -gA 'nvme1=()'; /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1
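Before the nvme1 register dump below, the nvme0n1 namespace fields recorded above decode as follows (traced values plus standard NVMe identify semantics): flbas=0x4 selects format lbaf4, whose lbads:12 means 2^12 = 4096-byte logical blocks with no metadata (ms:0), so the nsze=0x140000 blocks amount to a 5 GiB namespace:

    blocks=$((0x140000))      # nsze: 1310720 logical blocks
    bs=$((1 << 12))           # lbaf4 lbads:12 -> 4096-byte block size
    echo $((blocks * bs))     # 5368709120 bytes = 5 GiB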
IFS=: 00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.199 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 
10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.200 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 
10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.201 10:09:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"'
00:09:54.201 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0
[xtrace condensed -- nvme_get continues filling nvme1[] from 'nvme id-ctrl /dev/nvme1' via the same IFS=:/read/eval pattern: pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0]
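The repetitive eval/IFS/read entries above are nvme_get() splitting each "reg : val" line of nvme-cli output into a global bash associative array. A minimal sketch reconstructed from this xtrace (functions.sh@16-23 as traced; the real SPDK helper may differ in quoting and cleanup details):

    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                      # e.g. declares global array nvme1
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue            # skip lines without a "reg : val" pair
            reg=${reg//[[:space:]]/}             # normalize the register name
            val=${val# }                         # drop the leading space after ':'
            eval "${ref}[\$reg]=\$val"           # store into the named assoc array
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Called as nvme_get nvme1 id-ctrl /dev/nvme1, this yields the nvme1[pels]=0, nvme1[sqes]=0x66, ... assignments seen in the trace; since "val" is the last read variable it keeps embedded colons, which is why subnqn=nqn.2019-08.org.qemu:12340 survives intact.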
[xtrace condensed -- nvme1 power-state fields: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload='-']
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:09:54.202 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
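The functions.sh@53-57 lines show how each controller's namespaces are walked: a nameref to a per-controller array, then one nvme_get id-ns per /sys node. A sketch of that loop as traced (the wrapper name here is hypothetical; the trace runs these lines inline, and nvme1_ns is assumed to be declared by the caller):

    discover_namespaces() {
        local -n _ctrl_ns=${ctrl_dev}_ns         # functions.sh@53, e.g. nvme1_ns
        local ns ns_dev
        for ns in "$ctrl/${ctrl##*/}n"*; do      # functions.sh@54: /sys/class/nvme/nvme1/nvme1n1 ...
            [[ -e $ns ]] || continue             # functions.sh@55
            ns_dev=${ns##*/}                     # functions.sh@56 -> nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # functions.sh@57
            _ctrl_ns[${ns##*n}]=$ns_dev          # functions.sh@58 -> index "1"
        done
    }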
[xtrace condensed -- nvme_get fills nvme1n1[] from 'nvme id-ns /dev/nvme1n1': nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000 lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)']
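nvme1n1 reports flbas=0x7, so the in-use format is lbaf7 above, and lbads is the log2 of the logical block size. A worked decode using the traced values (the arithmetic is an illustration, not part of the script):

    flbas=0x7                            # bits 3:0 select the LBA format index
    fmt_index=$((flbas & 0xf))           # -> 7, matching 'lbaf7 ... (in use)'
    lbads=12                             # from lbaf7: 'ms:64 lbads:12 rp:0'
    echo "LBA format $fmt_index: $((1 << lbads))-byte blocks + 64-byte metadata"
    # -> LBA format 7: 4096-byte blocks + 64-byte metadata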
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:09:54.204 10:09:59 nvme_fdp -- scripts/common.sh@18 -- # local i
00:09:54.204 10:09:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:09:54.204 10:09:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:09:54.204 10:09:59 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
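The functions.sh@47-63 lines are the outer discovery loop: each /sys/class/nvme/nvmeN is gated through pci_can_use (from the scripts/common.sh trace it matches the device against PCI allow/block lists; both expansions are empty here, so it returns 0) and then recorded in bookkeeping arrays. A sketch under those assumptions; the PCI-address derivation is assumed, since the trace only shows the resulting pci= value:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue                        # functions.sh@48
        pci=$(basename "$(readlink -f "$ctrl/device")")   # assumed source of pci=0000:00:12.0
        pci_can_use "$pci" || continue                    # functions.sh@50
        ctrl_dev=${ctrl##*/}                              # functions.sh@51 -> nvme2
        nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"     # functions.sh@52
        ctrls["$ctrl_dev"]=$ctrl_dev                      # functions.sh@60
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                 # functions.sh@61
        bdfs["$ctrl_dev"]=$pci                            # functions.sh@62
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev        # functions.sh@63
    done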
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:09:54.204 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
[xtrace condensed -- nvme_get fills nvme2[] from 'nvme id-ctrl /dev/nvme2': vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0]
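id-ctrl reports the thermal thresholds wctemp/cctemp in kelvins; converting the traced values with the integer offset (a worked example, not part of the script):

    wctemp=343 cctemp=373
    echo "warning threshold: $((wctemp - 273)) C, critical: $((cctemp - 273)) C"
    # -> warning threshold: 70 C, critical: 100 C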
[xtrace condensed -- nvme2 id-ctrl fields, continued: hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d]
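sqes=0x66 and cqes=0x44 pack the maximum (bits 7:4) and required (bits 3:0) queue-entry sizes as powers of two; decoding the traced values (illustrative arithmetic only):

    sqes=0x66 cqes=0x44
    echo "SQ entry size: $((1 << (sqes & 0xf))) bytes (max $((1 << (sqes >> 4))))"
    echo "CQ entry size: $((1 << (cqes & 0xf))) bytes (max $((1 << (cqes >> 4))))"
    # -> SQ entries are 64 bytes, CQ entries 16 bytes; required == max here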
[xtrace condensed -- nvme2 id-ctrl fields, continued: fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0]
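oncs=0x15d advertises optional NVM commands as a bit mask; decoding the traced value against the NVMe base-spec bit assignments (names from the spec as I recall them, so treat as a sketch):

    oncs=0x15d
    names=(Compare "Write Uncorrectable" "Dataset Management" "Write Zeroes"
           "Save/Select in Features" Reservations Timestamp Verify Copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && printf 'bit %d: %s\n' "$i" "${names[i]}"
    done
    # -> bits 0,2,3,4,6,8: Compare, Dataset Management, Write Zeroes,
    #    Save/Select in Features, Timestamp, Copy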
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.224 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:54.225 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
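[Editor's sketch] The entries above and below are nvme_get in test/nvme/functions.sh at work: it pipes nvme-cli output through a `while IFS=: read -r reg val` loop and evals every non-empty reg/val pair into a globally scoped associative array (here nvme2n1). A minimal standalone sketch of that pattern — parse_nvme_output is a hypothetical name, and it assumes a plain nvme binary on PATH rather than the /usr/local/src/nvme-cli/nvme build used in this run:

    # Sketch of the reg/val parse loop traced here -- not the SPDK code itself.
    parse_nvme_output() {
        local ref=$1 cmd=$2 dev=$3 reg val
        local -gA "$ref=()"                  # same trick as functions.sh@20
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue        # the functions.sh@22 guard
            reg=${reg//[[:space:]]/}         # keys arrive padded, e.g. 'nsze    '
            eval "${ref}[\$reg]=\${val# }"   # the functions.sh@23 assignment
        done < <(nvme "$cmd" "$dev")
    }

After parse_nvme_output nvme2n1 id-ns /dev/nvme2n1, the fields read back directly: echo "${nvme2n1[nsze]}" prints 0x100000, matching the entries in this trace.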
00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.225 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
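[Editor's sketch] Each namespace array built this way is then filed into the bookkeeping tables that appear later in this trace (functions.sh@54-63): a per-controller loop globs the sysfs namespace nodes, records each namespace index through a _ctrl_ns nameref, and finally registers the controller in ctrls, nvmes, bdfs and ordered_ctrls. A hedged sketch of that enumeration under the same assumptions as above — scan_nvme_ctrls is a hypothetical name, and the bdfs line is an assumption about where the PCI address comes from:

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls
    scan_nvme_ctrls() {
        local ctrl ns ctrl_dev ns_dev
        for ctrl in /sys/class/nvme/nvme*; do        # functions.sh@47
            [[ -e $ctrl ]] || continue               # functions.sh@48
            ctrl_dev=${ctrl##*/}                     # e.g. nvme2
            parse_nvme_output "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"
            local -gA "${ctrl_dev}_ns=()"
            local -n _ctrl_ns=${ctrl_dev}_ns         # functions.sh@53
            for ns in "$ctrl/${ctrl##*/}n"*; do      # functions.sh@54
                [[ -e $ns ]] || continue             # functions.sh@55
                ns_dev=${ns##*/}                     # e.g. nvme2n1
                parse_nvme_output "$ns_dev" id-ns "/dev/$ns_dev"
                _ctrl_ns[${ns_dev##*n}]=$ns_dev      # functions.sh@58
            done
            ctrls["$ctrl_dev"]=$ctrl_dev             # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns        # functions.sh@61
            # assumption: BDF taken from the sysfs device link (cf. pci=... entries)
            bdfs["$ctrl_dev"]=$(basename "$(readlink -f "$ctrl/device")")
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev   # functions.sh@63
        done
    }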
00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.226 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.227 10:09:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:54.227 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:54.228 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.228 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:54.229 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
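[Editor's sketch] The lbafN strings captured for each namespace make the geometry easy to recover once parsing finishes. Every namespace in this run reports flbas=0x4, i.e. LBA format 4 ('ms:0 lbads:12 rp:0 (in use)'): no metadata and a 2^12 = 4096-byte data size, so nsze=0x100000 blocks is a 4 GiB namespace. A short worked example against the nvme2n1 array built earlier, under the same assumptions as the sketches above:

    # Recover the active LBA geometry from the captured id-ns fields.
    flbas_idx=$(( ${nvme2n1[flbas]} & 0xf ))       # low nibble indexes the LBA format
    lbaf=${nvme2n1[lbaf$flbas_idx]}                # 'ms:0 lbads:12 rp:0 (in use)'
    lbads=${lbaf#*lbads:}                          # '12 rp:0 (in use)'
    lbads=${lbads%% *}                             # '12'
    echo "$(( 1 << lbads ))"                       # 4096-byte logical blocks
    echo "$(( ${nvme2n1[nsze]} * (1 << lbads) ))"  # 1048576 blocks = 4294967296 B (4 GiB)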
00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:54.230 
10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.230 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.231 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:54.232 10:09:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:54.232 10:09:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:54.232 10:09:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:54.232 10:09:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:54.232 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:54.232 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 
10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.233 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 
10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.234 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:54.235 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
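The id-ctrl dump for nvme3 continues below (subnqn, queue-entry and power-state fields) using the same pattern for every field: nvme-cli prints one "reg : val" line per field, IFS=: splits it, and an eval stores the pair into a global associative array named after the controller. A minimal bash sketch of that loop follows; it is a simplification, not the exact nvme/functions.sh source, and the whitespace trimming is an assumption (the CI run invokes nvme-cli as /usr/local/src/nvme-cli/nvme):

# Sketch of the nvme_get pattern traced above (simplified, hypothetical).
# Assumes nvme-cli id-ctrl output of the form "vid : 0x1b36".
nvme_get_sketch() {
    local ref=$1 dev=$2 reg val
    local -gA "$ref=()"                        # e.g. declares global nvme3=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue              # same guard as functions.sh@22
        eval "${ref}[${reg// /}]=\"${val# }\"" # e.g. nvme3[vid]="0x1b36"
    done < <(nvme id-ctrl "$dev")
}

Called as nvme_get_sketch nvme3 /dev/nvme3 it would populate nvme3[vid], nvme3[sn], nvme3[ctratt] and so on, matching the assignments in the surrounding trace.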
00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:54.236 10:09:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
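The feature probe that starts here (and continues below for nvme3 and nvme2) reduces each controller to one arithmetic test: CTRATT bit 19 is the Flexible Data Placement capability bit, so a controller qualifies only when (( ctratt & 1 << 19 )) is non-zero. A standalone sketch of ctrl_has_fdp under that reading, built on the arrays filled in by the id-ctrl parse above:

# Sketch of the traced ctrl_has_fdp check: true iff CTRATT bit 19 is set.
ctrl_has_fdp_sketch() {
    local -n _ctrl=$1                  # nameref to a parsed array, e.g. nvme3
    local ctratt=${_ctrl[ctratt]:-0}
    (( ctratt & 1 << 19 ))             # 0x8000 -> false, 0x88010 -> true
}

ctrl_has_fdp_sketch nvme3 && echo nvme3   # reproduces the 'echo nvme3' below

This is why only nvme3 is selected: nvme0, nvme1 and nvme2 report ctratt 0x8000 (bit 15 only), while nvme3 reports 0x88010, which includes bit 19 (0x80000).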
00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:54.236 10:09:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:54.237 10:09:59 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:54.237 10:09:59 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:54.237 10:09:59 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:54.237 10:09:59 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:54.494 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.096 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.096 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.096 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.096 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:55.096 10:10:00 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:55.096 10:10:00 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:55.096 10:10:00 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.096 10:10:00 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:55.096 ************************************ 00:09:55.096 START TEST nvme_flexible_data_placement 00:09:55.096 ************************************ 00:09:55.096 10:10:00 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:55.355 Initializing NVMe Controllers 00:09:55.355 Attaching to 0000:00:13.0 00:09:55.355 Controller supports FDP Attached to 0000:00:13.0 00:09:55.355 Namespace ID: 1 Endurance Group ID: 1 00:09:55.355 Initialization complete. 00:09:55.355 00:09:55.355 ================================== 00:09:55.355 == FDP tests for Namespace: #01 == 00:09:55.355 ================================== 00:09:55.355 00:09:55.355 Get Feature: FDP: 00:09:55.355 ================= 00:09:55.355 Enabled: Yes 00:09:55.355 FDP configuration Index: 0 00:09:55.355 00:09:55.355 FDP configurations log page 00:09:55.355 =========================== 00:09:55.355 Number of FDP configurations: 1 00:09:55.355 Version: 0 00:09:55.355 Size: 112 00:09:55.355 FDP Configuration Descriptor: 0 00:09:55.355 Descriptor Size: 96 00:09:55.355 Reclaim Group Identifier format: 2 00:09:55.355 FDP Volatile Write Cache: Not Present 00:09:55.355 FDP Configuration: Valid 00:09:55.355 Vendor Specific Size: 0 00:09:55.355 Number of Reclaim Groups: 2 00:09:55.355 Number of Reclaim Unit Handles: 8 00:09:55.355 Max Placement Identifiers: 128 00:09:55.355 Number of Namespaces Supported: 256 00:09:55.355 Reclaim Unit Nominal Size: 6000000 bytes 00:09:55.355 Estimated Reclaim Unit Time Limit: Not Reported 00:09:55.355 RUH Desc #000: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #001: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #002: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #003: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #004: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #005: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #006: RUH Type: Initially Isolated 00:09:55.355 RUH Desc #007: RUH Type: Initially Isolated 00:09:55.355 00:09:55.355 FDP reclaim unit handle usage log page 00:09:55.355 ====================================== 00:09:55.355 Number of Reclaim Unit Handles: 8 00:09:55.355 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:55.355 RUH Usage Desc #001: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #002: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #003: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #004: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #005: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #006: RUH Attributes: Unused 00:09:55.355 RUH Usage Desc #007: RUH Attributes: Unused 00:09:55.355 00:09:55.355 FDP statistics log page 00:09:55.355 ======================= 00:09:55.355 Host bytes with metadata written: 930668544 00:09:55.355 Media bytes with metadata written: 930775040 00:09:55.355 Media bytes erased: 0 00:09:55.355 00:09:55.355 FDP Reclaim unit handle status 00:09:55.355 ============================== 00:09:55.355 Number of RUHS descriptors: 2 00:09:55.355 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000004872 00:09:55.355 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:55.355 00:09:55.355 FDP write on placement id: 0 success 00:09:55.355 00:09:55.355 Set Feature: Enabling FDP events on Placement handle: #0
Success 00:09:55.355 00:09:55.355 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:55.355 00:09:55.355 Get Feature: FDP Events for Placement handle: #0 00:09:55.355 ======================== 00:09:55.355 Number of FDP Events: 6 00:09:55.355 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:55.355 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:55.355 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:55.355 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:55.355 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:55.355 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:55.355 00:09:55.355 FDP events log page 00:09:55.355 =================== 00:09:55.355 Number of FDP events: 1 00:09:55.355 FDP Event #0: 00:09:55.355 Event Type: RU Not Written to Capacity 00:09:55.355 Placement Identifier: Valid 00:09:55.355 NSID: Valid 00:09:55.355 Location: Valid 00:09:55.355 Placement Identifier: 0 00:09:55.355 Event Timestamp: 5 00:09:55.355 Namespace Identifier: 1 00:09:55.355 Reclaim Group Identifier: 0 00:09:55.355 Reclaim Unit Handle Identifier: 0 00:09:55.355 00:09:55.355 FDP test passed 00:09:55.355 00:09:55.355 real 0m0.232s 00:09:55.355 user 0m0.080s 00:09:55.355 sys 0m0.052s 00:09:55.355 10:10:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.355 10:10:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:55.355 ************************************ 00:09:55.355 END TEST nvme_flexible_data_placement 00:09:55.355 ************************************ 00:09:55.355 ************************************ 00:09:55.355 END TEST nvme_fdp 00:09:55.355 ************************************ 00:09:55.355 00:09:55.355 real 0m7.377s 00:09:55.355 user 0m1.012s 00:09:55.355 sys 0m1.289s 00:09:55.355 10:10:01 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:55.355 10:10:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:55.614 10:10:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:55.614 10:10:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:55.614 10:10:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:55.614 10:10:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:55.614 10:10:01 -- common/autotest_common.sh@10 -- # set +x 00:09:55.614 ************************************ 00:09:55.614 START TEST nvme_rpc 00:09:55.614 ************************************ 00:09:55.614 10:10:01 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:55.614 * Looking for test storage...
00:09:55.614 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:55.614 10:10:01 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:55.614 10:10:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:55.614 10:10:01 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.614 10:10:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.615 10:10:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.615 --rc genhtml_branch_coverage=1 00:09:55.615 --rc genhtml_function_coverage=1 00:09:55.615 --rc genhtml_legend=1 00:09:55.615 --rc geninfo_all_blocks=1 00:09:55.615 --rc geninfo_unexecuted_blocks=1 00:09:55.615 00:09:55.615 ' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.615 --rc genhtml_branch_coverage=1 00:09:55.615 --rc genhtml_function_coverage=1 00:09:55.615 --rc genhtml_legend=1 00:09:55.615 --rc geninfo_all_blocks=1 00:09:55.615 --rc geninfo_unexecuted_blocks=1 00:09:55.615 00:09:55.615 ' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:55.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.615 --rc genhtml_branch_coverage=1 00:09:55.615 --rc genhtml_function_coverage=1 00:09:55.615 --rc genhtml_legend=1 00:09:55.615 --rc geninfo_all_blocks=1 00:09:55.615 --rc geninfo_unexecuted_blocks=1 00:09:55.615 00:09:55.615 ' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.615 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.615 --rc genhtml_branch_coverage=1 00:09:55.615 --rc genhtml_function_coverage=1 00:09:55.615 --rc genhtml_legend=1 00:09:55.615 --rc geninfo_all_blocks=1 00:09:55.615 --rc geninfo_unexecuted_blocks=1 00:09:55.615 00:09:55.615 ' 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65773 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65773 00:09:55.615 10:10:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65773 ']' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:55.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:55.615 10:10:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:55.874 [2024-11-04 10:10:01.390670] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
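The spdk_tgt startup banner continues below with its DPDK EAL parameter line. The get_first_nvme_bdf step traced just above it is a short pipeline: gen_nvme.sh emits a JSON bdev config, jq extracts every traddr, and the first address becomes the test target. A sketch of that discovery, reusing the jq filter shown in the trace ($rootdir stands for /home/vagrant/spdk_repo/spdk here):

# Sketch of get_first_nvme_bdf as traced: first traddr from the generated
# NVMe bdev config.
get_first_nvme_bdf_sketch() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || return 1   # mirrors the "(( 4 == 0 ))" guard above
    echo "${bdfs[0]}"                   # here: 0000:00:10.0
}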
00:09:55.874 [2024-11-04 10:10:01.390811] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65773 ] 00:09:55.874 [2024-11-04 10:10:01.550015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:56.133 [2024-11-04 10:10:01.653768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.133 [2024-11-04 10:10:01.653779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.700 10:10:02 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:56.700 10:10:02 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:56.700 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:56.958 Nvme0n1 00:09:56.958 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:56.958 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:56.958 request: 00:09:56.958 { 00:09:56.958 "bdev_name": "Nvme0n1", 00:09:56.958 "filename": "non_existing_file", 00:09:56.958 "method": "bdev_nvme_apply_firmware", 00:09:56.958 "req_id": 1 00:09:56.958 } 00:09:56.958 Got JSON-RPC error response 00:09:56.958 response: 00:09:56.958 { 00:09:56.958 "code": -32603, 00:09:56.958 "message": "open file failed." 00:09:56.958 } 00:09:57.216 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:57.216 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:57.216 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:57.216 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:57.216 10:10:02 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65773 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65773 ']' 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65773 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65773 00:09:57.216 killing process with pid 65773 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65773' 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65773 00:09:57.216 10:10:02 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65773 00:09:58.587 00:09:58.587 real 0m3.215s 00:09:58.587 user 0m6.155s 00:09:58.587 sys 0m0.486s 00:09:58.587 10:10:04 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:58.587 10:10:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.587 ************************************ 00:09:58.587 END TEST nvme_rpc 00:09:58.587 ************************************ 00:09:58.845 10:10:04 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:58.845 10:10:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:09:58.845 10:10:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:58.845 10:10:04 -- common/autotest_common.sh@10 -- # set +x 00:09:58.845 ************************************ 00:09:58.845 START TEST nvme_rpc_timeouts 00:09:58.845 ************************************ 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:58.845 * Looking for test storage... 00:09:58.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.845 10:10:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.845 --rc genhtml_branch_coverage=1 00:09:58.845 --rc genhtml_function_coverage=1 00:09:58.845 --rc genhtml_legend=1 00:09:58.845 --rc geninfo_all_blocks=1 00:09:58.845 --rc geninfo_unexecuted_blocks=1 00:09:58.845 00:09:58.845 ' 00:09:58.845 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:58.845 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:58.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.846 --rc genhtml_branch_coverage=1 00:09:58.846 --rc genhtml_function_coverage=1 00:09:58.846 --rc genhtml_legend=1 00:09:58.846 --rc geninfo_all_blocks=1 00:09:58.846 --rc geninfo_unexecuted_blocks=1 00:09:58.846 00:09:58.846 ' 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65838 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65838 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65870 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
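The run set up here follows a snapshot-and-diff pattern, visible in the trace below: save the default configuration over JSON-RPC, change the NVMe timeouts with bdev_nvme_set_options, save again, and compare selected fields from the two snapshots. A condensed sketch of that flow, using the paths and flags from the trace; the redirection into the two settings files is an assumption, since the trace elides it:

# Condensed sketch of the nvme_rpc_timeouts flow (spdk_tgt already running).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc save_config > /tmp/settings_default_65838            # defaults snapshot
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_65838           # modified snapshot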
00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65870 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65870 ']' 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:58.846 10:10:04 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:58.846 10:10:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:59.104 [2024-11-04 10:10:04.595041] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:09:59.104 [2024-11-04 10:10:04.595162] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65870 ] 00:09:59.104 [2024-11-04 10:10:04.756871] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:59.362 [2024-11-04 10:10:04.857323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.362 [2024-11-04 10:10:04.857409] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.940 Checking default timeout settings: 00:09:59.940 10:10:05 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:59.940 10:10:05 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:09:59.940 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:59.940 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:00.199 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:00.199 Making settings changes with rpc: 00:10:00.199 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:00.456 Check default vs. modified settings: 00:10:00.456 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:00.456 10:10:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 Setting action_on_timeout is changed as expected. 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 Setting timeout_us is changed as expected. 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:00.715 Setting timeout_admin_us is changed as expected. 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65838 /tmp/settings_modified_65838 00:10:00.715 10:10:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65870 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65870 ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65870 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65870 00:10:00.715 killing process with pid 65870 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65870' 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65870 00:10:00.715 10:10:06 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65870 00:10:02.089 10:10:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:10:02.089 RPC TIMEOUT SETTING TEST PASSED. 
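[annotation] Condensed from the xtrace above, the whole nvme_rpc_timeouts pass amounts to snapshotting the JSON config before and after a single RPC and diffing three fields. Sketch using the same commands the log shows — the $$-suffixed temp paths are illustrative; this run used /tmp/settings_default_65838 and /tmp/settings_modified_65838:

  rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc_py save_config > /tmp/settings_default_$$
  $rpc_py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  $rpc_py save_config > /tmp/settings_modified_$$
  for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_$$ | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" != "$after" ] && echo "Setting $setting is changed as expected."
  done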
00:10:02.089 00:10:02.089 real 0m3.340s 00:10:02.089 user 0m6.502s 00:10:02.089 sys 0m0.504s 00:10:02.089 10:10:07 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.089 ************************************ 00:10:02.089 END TEST nvme_rpc_timeouts 00:10:02.089 ************************************ 00:10:02.089 10:10:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:02.089 10:10:07 -- spdk/autotest.sh@239 -- # uname -s 00:10:02.089 10:10:07 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:02.089 10:10:07 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:02.089 10:10:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:02.089 10:10:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.089 10:10:07 -- common/autotest_common.sh@10 -- # set +x 00:10:02.089 ************************************ 00:10:02.089 START TEST sw_hotplug 00:10:02.089 ************************************ 00:10:02.089 10:10:07 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:02.089 * Looking for test storage... 00:10:02.347 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.347 10:10:07 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:02.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.347 --rc genhtml_branch_coverage=1 00:10:02.347 --rc genhtml_function_coverage=1 00:10:02.347 --rc genhtml_legend=1 00:10:02.347 --rc geninfo_all_blocks=1 00:10:02.347 --rc geninfo_unexecuted_blocks=1 00:10:02.347 00:10:02.347 ' 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:02.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.347 --rc genhtml_branch_coverage=1 00:10:02.347 --rc genhtml_function_coverage=1 00:10:02.347 --rc genhtml_legend=1 00:10:02.347 --rc geninfo_all_blocks=1 00:10:02.347 --rc geninfo_unexecuted_blocks=1 00:10:02.347 00:10:02.347 ' 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:02.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.347 --rc genhtml_branch_coverage=1 00:10:02.347 --rc genhtml_function_coverage=1 00:10:02.347 --rc genhtml_legend=1 00:10:02.347 --rc geninfo_all_blocks=1 00:10:02.347 --rc geninfo_unexecuted_blocks=1 00:10:02.347 00:10:02.347 ' 00:10:02.347 10:10:07 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:02.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.347 --rc genhtml_branch_coverage=1 00:10:02.347 --rc genhtml_function_coverage=1 00:10:02.347 --rc genhtml_legend=1 00:10:02.347 --rc geninfo_all_blocks=1 00:10:02.347 --rc geninfo_unexecuted_blocks=1 00:10:02.347 00:10:02.347 ' 00:10:02.347 10:10:07 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:02.605 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:02.605 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:02.605 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:02.605 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:02.605 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:02.605 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:02.605 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:02.605 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
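[annotation] The nvme_in_userspace expansion traced below discovers controllers by PCI class code: class 01 (mass storage), subclass 08 (NVM), prog-if 02 (NVM Express) — hence cc="0108" and the -p02 filter. Condensed, the pipeline it assembles is roughly the following; xtrace does not fix the order of pipeline stages, so tr is placed before awk here so quote-stripping happens before the field compare:

  # prints one BDF per NVMe controller, e.g. 0000:00:10.0
  lspci -mm -n -D | grep -i -- -p02 | tr -d '"' | awk -v cc="0108" -F ' ' '{if (cc ~ $2) print $1}'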
00:10:02.605 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:02.605 10:10:08 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:02.605 10:10:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:02.863 10:10:08 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.863 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:02.863 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:02.863 10:10:08 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:03.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:03.121 Waiting for block devices as requested 00:10:03.121 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.379 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.379 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:03.379 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:08.639 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:08.639 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:08.639 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:08.897 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:08.897 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:08.897 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:09.155 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:09.155 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:09.414 10:10:14 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:09.414 10:10:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66720 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:09.414 10:10:15 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:09.414 10:10:15 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:09.414 10:10:15 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:09.414 10:10:15 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:09.414 10:10:15 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:09.414 10:10:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:09.671 Initializing NVMe Controllers 00:10:09.671 Attaching to 0000:00:10.0 00:10:09.671 Attaching to 0000:00:11.0 00:10:09.671 Attached to 0000:00:10.0 00:10:09.671 Attached to 0000:00:11.0 00:10:09.671 Initialization complete. Starting I/O... 
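[annotation] From here two actors interleave: the examples/hotplug app launched above polls for attach/detach every 6 s (-n 6 -r 6) while driving I/O, and remove_attach_helper yanks the two allowed controllers out from under it three times (hotplug_events=3). xtrace hides redirect targets, so the sysfs paths below are an assumed reconstruction of the echo pairs at sw_hotplug.sh@40 and @56-62, not read from the log:

  for dev in 0000:00:10.0 0000:00:11.0; do
      echo 1 > "/sys/bus/pci/devices/$dev/remove"      # hot-remove (the "echo 1" at @40)
  done
  sleep 6                                              # hotplug_wait: give the app time to notice
  echo 1 > /sys/bus/pci/rescan                         # bring the slots back (the "echo 1" at @56)
  for dev in 0000:00:10.0 0000:00:11.0; do
      echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
      echo "$dev" > /sys/bus/pci/drivers_probe                             # @60/@61 (assumed target)
      echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the override
  done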
00:10:09.672 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:09.672 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:09.672 00:10:10.623 QEMU NVMe Ctrl (12340 ): 2494 I/Os completed (+2494) 00:10:10.623 QEMU NVMe Ctrl (12341 ): 2515 I/Os completed (+2515) 00:10:10.623 00:10:11.556 QEMU NVMe Ctrl (12340 ): 5569 I/Os completed (+3075) 00:10:11.556 QEMU NVMe Ctrl (12341 ): 5580 I/Os completed (+3065) 00:10:11.556 00:10:12.490 QEMU NVMe Ctrl (12340 ): 9112 I/Os completed (+3543) 00:10:12.490 QEMU NVMe Ctrl (12341 ): 9126 I/Os completed (+3546) 00:10:12.490 00:10:13.863 QEMU NVMe Ctrl (12340 ): 12692 I/Os completed (+3580) 00:10:13.863 QEMU NVMe Ctrl (12341 ): 12713 I/Os completed (+3587) 00:10:13.863 00:10:14.793 QEMU NVMe Ctrl (12340 ): 15703 I/Os completed (+3011) 00:10:14.793 QEMU NVMe Ctrl (12341 ): 15748 I/Os completed (+3035) 00:10:14.793 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.360 [2024-11-04 10:10:21.030857] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:15.360 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:15.360 [2024-11-04 10:10:21.032026] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.032077] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.032095] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.032113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:15.360 [2024-11-04 10:10:21.033995] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.034038] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.034052] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.034066] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:15.360 [2024-11-04 10:10:21.054839] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:15.360 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:15.360 [2024-11-04 10:10:21.055891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.055930] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.055950] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.055964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:15.360 [2024-11-04 10:10:21.057622] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.057657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.057672] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 [2024-11-04 10:10:21.057684] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:15.360 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:15.360 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:15.360 EAL: Scan for (pci) bus failed. 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:15.675 Attaching to 0000:00:10.0 00:10:15.675 Attached to 0000:00:10.0 00:10:15.675 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:15.675 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:15.675 10:10:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:15.675 Attaching to 0000:00:11.0 00:10:15.675 Attached to 0000:00:11.0 00:10:16.607 QEMU NVMe Ctrl (12340 ): 3600 I/Os completed (+3600) 00:10:16.607 QEMU NVMe Ctrl (12341 ): 3297 I/Os completed (+3297) 00:10:16.607 00:10:17.541 QEMU NVMe Ctrl (12340 ): 7229 I/Os completed (+3629) 00:10:17.541 QEMU NVMe Ctrl (12341 ): 6776 I/Os completed (+3479) 00:10:17.541 00:10:18.913 QEMU NVMe Ctrl (12340 ): 10449 I/Os completed (+3220) 00:10:18.913 QEMU NVMe Ctrl (12341 ): 10087 I/Os completed (+3311) 00:10:18.913 00:10:19.485 QEMU NVMe Ctrl (12340 ): 13595 I/Os completed (+3146) 00:10:19.485 QEMU NVMe Ctrl (12341 ): 13256 I/Os completed (+3169) 00:10:19.485 00:10:20.856 QEMU NVMe Ctrl (12340 ): 17243 I/Os completed (+3648) 00:10:20.856 QEMU NVMe Ctrl (12341 ): 16922 I/Os completed (+3666) 00:10:20.856 00:10:21.791 QEMU NVMe Ctrl (12340 ): 20888 I/Os completed (+3645) 00:10:21.791 QEMU NVMe Ctrl (12341 ): 20580 I/Os completed (+3658) 00:10:21.791 00:10:22.724 QEMU NVMe Ctrl (12340 ): 24018 I/Os completed (+3130) 
00:10:22.724 QEMU NVMe Ctrl (12341 ): 23736 I/Os completed (+3156) 00:10:22.724 00:10:23.657 QEMU NVMe Ctrl (12340 ): 27306 I/Os completed (+3288) 00:10:23.657 QEMU NVMe Ctrl (12341 ): 27035 I/Os completed (+3299) 00:10:23.657 00:10:24.588 QEMU NVMe Ctrl (12340 ): 30931 I/Os completed (+3625) 00:10:24.588 QEMU NVMe Ctrl (12341 ): 30664 I/Os completed (+3629) 00:10:24.588 00:10:25.520 QEMU NVMe Ctrl (12340 ): 34042 I/Os completed (+3111) 00:10:25.520 QEMU NVMe Ctrl (12341 ): 33786 I/Os completed (+3122) 00:10:25.520 00:10:26.910 QEMU NVMe Ctrl (12340 ): 37041 I/Os completed (+2999) 00:10:26.910 QEMU NVMe Ctrl (12341 ): 36903 I/Os completed (+3117) 00:10:26.910 00:10:27.847 QEMU NVMe Ctrl (12340 ): 40109 I/Os completed (+3068) 00:10:27.847 QEMU NVMe Ctrl (12341 ): 40107 I/Os completed (+3204) 00:10:27.847 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.847 [2024-11-04 10:10:33.301547] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:27.847 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:27.847 [2024-11-04 10:10:33.302691] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.302772] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.302806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.302824] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:27.847 [2024-11-04 10:10:33.304759] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.304830] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.304845] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.304861] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:27.847 [2024-11-04 10:10:33.325536] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:27.847 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:27.847 [2024-11-04 10:10:33.326591] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.326628] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.326651] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.326667] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:27.847 [2024-11-04 10:10:33.329855] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.329891] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.329905] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 [2024-11-04 10:10:33.329919] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:27.847 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:27.847 EAL: Scan for (pci) bus failed. 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:27.847 Attaching to 0000:00:10.0 00:10:27.847 Attached to 0000:00:10.0 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:27.847 10:10:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:27.847 Attaching to 0000:00:11.0 00:10:27.847 Attached to 0000:00:11.0 00:10:28.780 QEMU NVMe Ctrl (12340 ): 2348 I/Os completed (+2348) 00:10:28.780 QEMU NVMe Ctrl (12341 ): 2072 I/Os completed (+2072) 00:10:28.780 00:10:29.711 QEMU NVMe Ctrl (12340 ): 5395 I/Os completed (+3047) 00:10:29.711 QEMU NVMe Ctrl (12341 ): 5105 I/Os completed (+3033) 00:10:29.711 00:10:30.657 QEMU NVMe Ctrl (12340 ): 8503 I/Os completed (+3108) 00:10:30.657 QEMU NVMe Ctrl (12341 ): 8226 I/Os completed (+3121) 00:10:30.657 00:10:31.590 QEMU NVMe Ctrl (12340 ): 11633 I/Os completed (+3130) 00:10:31.590 QEMU NVMe Ctrl (12341 ): 11504 I/Os completed (+3278) 00:10:31.590 00:10:32.523 QEMU NVMe Ctrl (12340 ): 14723 I/Os completed (+3090) 00:10:32.523 QEMU NVMe Ctrl (12341 ): 14580 I/Os completed (+3076) 00:10:32.523 00:10:33.895 QEMU NVMe Ctrl (12340 ): 17927 I/Os completed (+3204) 00:10:33.895 QEMU NVMe Ctrl (12341 ): 18015 I/Os completed (+3435) 00:10:33.895 00:10:34.856 QEMU NVMe Ctrl (12340 ): 21099 I/Os completed (+3172) 00:10:34.856 QEMU NVMe Ctrl (12341 ): 21123 I/Os completed (+3108) 00:10:34.856 
00:10:35.789 QEMU NVMe Ctrl (12340 ): 24217 I/Os completed (+3118) 00:10:35.789 QEMU NVMe Ctrl (12341 ): 24329 I/Os completed (+3206) 00:10:35.789 00:10:36.721 QEMU NVMe Ctrl (12340 ): 27357 I/Os completed (+3140) 00:10:36.721 QEMU NVMe Ctrl (12341 ): 27415 I/Os completed (+3086) 00:10:36.721 00:10:37.667 QEMU NVMe Ctrl (12340 ): 30419 I/Os completed (+3062) 00:10:37.667 QEMU NVMe Ctrl (12341 ): 30434 I/Os completed (+3019) 00:10:37.667 00:10:38.607 QEMU NVMe Ctrl (12340 ): 33615 I/Os completed (+3196) 00:10:38.607 QEMU NVMe Ctrl (12341 ): 33632 I/Os completed (+3198) 00:10:38.608 00:10:39.541 QEMU NVMe Ctrl (12340 ): 37191 I/Os completed (+3576) 00:10:39.541 QEMU NVMe Ctrl (12341 ): 37206 I/Os completed (+3574) 00:10:39.541 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.107 [2024-11-04 10:10:45.562723] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:40.107 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:40.107 [2024-11-04 10:10:45.563703] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.563742] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.563758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.563773] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:40.107 [2024-11-04 10:10:45.565751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.565807] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.565822] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.565835] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:40.107 [2024-11-04 10:10:45.588572] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:40.107 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:40.107 [2024-11-04 10:10:45.589457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.589491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.589505] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.589517] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:40.107 [2024-11-04 10:10:45.590887] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.590915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.590929] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 [2024-11-04 10:10:45.590939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:40.107 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:40.107 EAL: Scan for (pci) bus failed. 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:40.107 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:40.108 Attaching to 0000:00:10.0 00:10:40.108 Attached to 0000:00:10.0 00:10:40.108 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:40.108 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:40.108 10:10:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:40.108 Attaching to 0000:00:11.0 00:10:40.365 Attached to 0000:00:11.0 00:10:40.365 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:40.365 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:40.365 [2024-11-04 10:10:45.857220] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:52.552 10:10:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:52.552 10:10:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:52.552 10:10:57 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.83 00:10:52.552 10:10:57 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.83 00:10:52.552 10:10:57 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:52.552 10:10:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.83 00:10:52.552 10:10:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.83 2 00:10:52.552 remove_attach_helper took 42.83s to complete (handling 2 nvme drive(s)) 10:10:57 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66720 00:10:59.131 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66720) - No such process 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66720 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67270 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67270 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67270 ']' 00:10:59.131 10:11:03 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:59.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:59.131 10:11:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.131 [2024-11-04 10:11:03.939142] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:10:59.131 [2024-11-04 10:11:03.939266] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67270 ] 00:10:59.131 [2024-11-04 10:11:04.097324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:59.131 [2024-11-04 10:11:04.196923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:10:59.131 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:59.131 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:59.131 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:59.131 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:59.131 10:11:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:59.132 10:11:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:59.132 10:11:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:59.132 10:11:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:59.132 10:11:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:10:59.132 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:59.132 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:59.132 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:59.132 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:59.132 10:11:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.691 10:11:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.691 10:11:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.691 10:11:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:05.691 10:11:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:05.691 [2024-11-04 10:11:10.890138] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:05.691 [2024-11-04 10:11:10.891466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:10.891504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:10.891517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:10.891536] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:10.891543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:10.891551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:10.891558] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:10.891566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:10.891573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:10.891584] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:10.891590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:10.891597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:05.691 10:11:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.691 10:11:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:05.691 [2024-11-04 10:11:11.390130] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
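[annotation] Unlike the first pass, this tgt_run_hotplug variant (use_bdev=true) decides a controller is really gone by polling the target instead of sysfs: bdev_bdfs lists the PCI addresses still backing bdevs, and the helper spins until the list drains. Condensed from the trace, with the rpc.py path as in the log:

  bdev_bdfs() {
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
          | jq -r '.[].driver_specific.nvme[].pci_address' \
          | sort -u
  }
  while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
      printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
      sleep 0.5
  done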
00:11:05.691 [2024-11-04 10:11:11.391342] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:11.391376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:11.391389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:11.391404] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:11.391413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:11.391420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:11.391429] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:11.391435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:11.391443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 [2024-11-04 10:11:11.391450] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:05.691 [2024-11-04 10:11:11.391458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:05.691 [2024-11-04 10:11:11.391465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:05.691 10:11:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:05.691 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:06.258 10:11:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.258 10:11:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:06.258 10:11:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:06.258 10:11:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.515 10:11:12 
sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:06.515 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:06.516 10:11:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:19.094 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:19.094 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:19.094 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:19.094 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.094 10:11:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:19.095 [2024-11-04 10:11:24.290342] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:19.095 [2024-11-04 10:11:24.291714] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.291751] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.291791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.291804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.291815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.291823] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.291831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.291838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.291847] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.291857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.291866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:19.095 [2024-11-04 10:11:24.790367] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
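The "Still waiting" polling above is driven by a small helper. A minimal sketch reconstructed from the xtrace (the sw_hotplug.sh line tags identify each piece, but the exact function bodies are assumptions; the trace shows jq reading the RPC output through a process-substitution fd, for which a plain pipe is equivalent):

bdev_bdfs() {
  # sw_hotplug.sh@12-@13: list the PCI addresses (BDFs) backing NVMe
  # bdevs, deduplicated.
  rpc_cmd bdev_get_bdevs |
    jq -r '.[].driver_specific.nvme[].pci_address' |
    sort -u
}

# sw_hotplug.sh@50-@51: poll every 0.5s until the detached controllers
# disappear from the bdev layer.
bdfs=($(bdev_bdfs))
while ((${#bdfs[@]} > 0)); do
  printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
  sleep 0.5
  bdfs=($(bdev_bdfs))
done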
00:11:19.095 [2024-11-04 10:11:24.791729] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.791762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.791776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.791804] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.791813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.791819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.791829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.791835] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.791844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 [2024-11-04 10:11:24.791851] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:19.095 [2024-11-04 10:11:24.791859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:19.095 [2024-11-04 10:11:24.791865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:19.095 10:11:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:19.095 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:19.352 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.352 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.352 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:19.353 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:19.353 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.353 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:19.353 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:19.353 10:11:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:19.353 10:11:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:19.353 10:11:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:19.353 10:11:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@68 
-- # true 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.542 [2024-11-04 10:11:37.090560] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:31.542 [2024-11-04 10:11:37.092056] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.542 [2024-11-04 10:11:37.092084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.542 [2024-11-04 10:11:37.092095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.542 [2024-11-04 10:11:37.092113] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.542 [2024-11-04 10:11:37.092121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.542 [2024-11-04 10:11:37.092131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.542 [2024-11-04 10:11:37.092139] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.542 [2024-11-04 10:11:37.092147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.542 [2024-11-04 10:11:37.092154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.542 [2024-11-04 10:11:37.092163] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:31.542 [2024-11-04 10:11:37.092170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:31.542 [2024-11-04 10:11:37.092178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:31.542 10:11:37 
sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:31.542 10:11:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.542 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:31.543 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.109 10:11:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.109 10:11:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.109 10:11:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.109 [2024-11-04 10:11:37.690564] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:32.109 [2024-11-04 10:11:37.691883] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.109 [2024-11-04 10:11:37.691911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.109 [2024-11-04 10:11:37.691923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.109 [2024-11-04 10:11:37.691939] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.109 [2024-11-04 10:11:37.691947] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.109 [2024-11-04 10:11:37.691954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.109 [2024-11-04 10:11:37.691964] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.109 [2024-11-04 10:11:37.691970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.109 [2024-11-04 10:11:37.691980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.109 [2024-11-04 10:11:37.691988] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:32.109 [2024-11-04 10:11:37.691995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:32.109 [2024-11-04 10:11:37.692002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:32.109 10:11:37 sw_hotplug -- 
nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:32.109 10:11:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:32.675 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:32.676 10:11:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.676 10:11:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:32.676 10:11:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:32.676 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:32.934 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:32.934 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:32.934 10:11:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:45.134 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:45.134 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:45.134 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.73 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.73 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.73 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.73 2 00:11:45.135 remove_attach_helper took 45.73s to complete (handling 2 nvme drive(s)) 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd 
bdev_nvme_set_hotplug -d 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:45.135 10:11:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:45.135 10:11:50 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.696 10:11:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.696 10:11:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.696 10:11:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:51.696 10:11:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:51.696 [2024-11-04 10:11:56.654663] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
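The 45.73s figure printed above comes from a timing wrapper in common/autotest_common.sh. A sketch under the assumption that timing_cmd works roughly like this (TIMEFORMAT=%2R and the final printf are taken verbatim from the trace; the capture idiom is assumed):

timing_cmd() {
  # TIMEFORMAT=%2R makes bash's builtin 'time' print only elapsed
  # wall-clock seconds with two decimals. The wrapped command's own
  # output is discarded, so the captured string is just the duration.
  local time=0 TIMEFORMAT=%2R
  time=$({ time "$@" > /dev/null 2>&1; } 2>&1)
  echo "$time"
}

helper_time=$(timing_cmd remove_attach_helper 3 6 true)
printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' \
  "$helper_time" 2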
00:11:51.696 [2024-11-04 10:11:56.655690] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:56.655727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:56.655738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:56.655756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:56.655763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:56.655771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:56.655778] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:56.655796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:56.655802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:56.655811] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:56.655818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:56.655828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:57.054668] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
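The '@40 -- # echo 1' entries are the detach half of each hotplug round. The trace records only the value written; one plausible target, given the controller-failure and abort messages that follow, is the PCI remove node (the sysfs path below is an assumption, not shown in the trace):

for dev in "${nvmes[@]}"; do
  # Surprise-remove the function; the driver then fails the controller
  # and aborts its outstanding admin commands, as logged above.
  echo 1 > "/sys/bus/pci/devices/$dev/remove"
done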
00:11:51.696 [2024-11-04 10:11:57.055694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:57.055723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:57.055735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:57.055752] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:57.055761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:57.055768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:57.055779] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:57.055797] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:57.055806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 [2024-11-04 10:11:57.055813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:51.696 [2024-11-04 10:11:57.055830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:51.696 [2024-11-04 10:11:57.055837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.696 10:11:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.696 10:11:57 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.696 10:11:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:51.696 10:11:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.890 [2024-11-04 10:12:09.454886] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:03.890 [2024-11-04 10:12:09.456398] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.890 [2024-11-04 10:12:09.456429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.890 [2024-11-04 10:12:09.456440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.890 [2024-11-04 10:12:09.456457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.890 [2024-11-04 10:12:09.456465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.890 [2024-11-04 10:12:09.456473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.890 [2024-11-04 10:12:09.456481] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.890 [2024-11-04 10:12:09.456489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.890 [2024-11-04 10:12:09.456495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.890 [2024-11-04 10:12:09.456504] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:03.890 [2024-11-04 10:12:09.456510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:03.890 [2024-11-04 10:12:09.456518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:03.890 10:12:09 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:03.890 10:12:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:03.890 10:12:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:04.147 [2024-11-04 10:12:09.854890] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:12:04.147 [2024-11-04 10:12:09.855859] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.147 [2024-11-04 10:12:09.855887] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.147 [2024-11-04 10:12:09.855899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.147 [2024-11-04 10:12:09.855915] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.147 [2024-11-04 10:12:09.855926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.147 [2024-11-04 10:12:09.855933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.147 [2024-11-04 10:12:09.855942] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.147 [2024-11-04 10:12:09.855949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.147 [2024-11-04 10:12:09.855958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.147 [2024-11-04 10:12:09.855967] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:04.147 [2024-11-04 10:12:09.855974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:04.147 [2024-11-04 10:12:09.855981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:12:04.405 10:12:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.405 10:12:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:04.405 10:12:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.405 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:04.663 10:12:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.855 10:12:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.855 10:12:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.855 10:12:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:16.855 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:16.856 10:12:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.856 10:12:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:16.856 [2024-11-04 10:12:22.355083] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
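The re-attach half ('@56' through '@62') echoes 1 once, then per device uio_pci_generic, the BDF twice, and an empty string. The sysfs paths are not in the trace; one plausible mapping is a bus rescan followed by a driver_override bind (every path here is an assumption):

echo 1 > /sys/bus/pci/rescan   # '@56': re-discover the removed functions
for dev in "${nvmes[@]}"; do
  echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"      # '@59'
  echo "$dev" > "/sys/bus/pci/devices/$dev/driver/unbind" 2>/dev/null || true  # '@60' (assumed)
  echo "$dev" > /sys/bus/pci/drivers_probe                                # '@61': bind per the override
  echo '' > "/sys/bus/pci/devices/$dev/driver_override"                   # '@62': clear the override
done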
00:12:16.856 [2024-11-04 10:12:22.356330] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.856 [2024-11-04 10:12:22.356364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.856 [2024-11-04 10:12:22.356376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.856 [2024-11-04 10:12:22.356394] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.856 [2024-11-04 10:12:22.356401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.856 [2024-11-04 10:12:22.356409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.856 [2024-11-04 10:12:22.356416] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.856 [2024-11-04 10:12:22.356429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.856 [2024-11-04 10:12:22.356436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.856 [2024-11-04 10:12:22.356444] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:16.856 [2024-11-04 10:12:22.356450] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:16.856 [2024-11-04 10:12:22.356458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:16.856 10:12:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:16.856 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:17.113 [2024-11-04 10:12:22.755090] nvme_ctrlr.c:1096:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
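Putting the rounds together, the overall shape of remove_attach_helper as it reads from the trace (sw_hotplug.sh@27-@29 give the parameter names; the helper functions and expected_bdfs are hypothetical stand-ins for the blocks sketched above):

remove_attach_helper() {
  local hotplug_events=$1   # 3: removal/re-attach rounds to run
  local hotplug_wait=$2     # 6: base wait, in seconds
  local use_bdev=$3         # true: verify presence via the bdev RPC
  sleep "$hotplug_wait"                      # '@36'
  while ((hotplug_events--)); do             # '@38'
    detach_all_nvmes                         # hypothetical: the '@39/@40' echoes
    wait_until_bdevs_gone                    # hypothetical: the 0.5s poll loop
    reattach_all_nvmes                       # hypothetical: the '@56-@62' echoes
    sleep $((hotplug_wait * 2))              # '@66': the 12s sleep
    [[ $(bdev_bdfs) == "$expected_bdfs" ]]   # '@70/@71': all BDFs back?
  done
}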
00:12:17.113 [2024-11-04 10:12:22.756407] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.113 [2024-11-04 10:12:22.756435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.113 [2024-11-04 10:12:22.756447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.113 [2024-11-04 10:12:22.756462] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.113 [2024-11-04 10:12:22.756472] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.113 [2024-11-04 10:12:22.756479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.113 [2024-11-04 10:12:22.756487] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.113 [2024-11-04 10:12:22.756494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.113 [2024-11-04 10:12:22.756504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.113 [2024-11-04 10:12:22.756512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:17.113 [2024-11-04 10:12:22.756523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:17.113 [2024-11-04 10:12:22.756529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:17.371 10:12:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.371 10:12:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:17.371 10:12:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.371 10:12:22 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:17.371 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:17.371 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.372 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:17.372 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:17.372 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:17.629 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:17.629 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:17.629 10:12:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.62 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.62 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:12:29.853 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:29.853 10:12:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67270 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67270 ']' 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67270 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67270 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:29.853 killing process with pid 67270 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67270' 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67270 00:12:29.853 10:12:35 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67270 00:12:30.809 10:12:36 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:31.067 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:31.326 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.326 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:31.583 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:31.583 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:31.583 00:12:31.583 real 2m29.473s 00:12:31.583 user 1m51.727s 00:12:31.583 sys 0m16.519s 00:12:31.583 10:12:37 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:12:31.583 10:12:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 ************************************ 00:12:31.583 END TEST sw_hotplug 00:12:31.583 ************************************ 00:12:31.583 10:12:37 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:31.583 10:12:37 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:31.583 10:12:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:31.583 10:12:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.583 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:12:31.583 ************************************ 00:12:31.583 START TEST nvme_xnvme 00:12:31.584 ************************************ 00:12:31.584 10:12:37 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:31.584 * Looking for test storage... 00:12:31.842 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.842 --rc genhtml_branch_coverage=1 00:12:31.842 --rc genhtml_function_coverage=1 00:12:31.842 --rc genhtml_legend=1 00:12:31.842 --rc geninfo_all_blocks=1 00:12:31.842 --rc geninfo_unexecuted_blocks=1 00:12:31.842 00:12:31.842 ' 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.842 --rc genhtml_branch_coverage=1 00:12:31.842 --rc genhtml_function_coverage=1 00:12:31.842 --rc genhtml_legend=1 00:12:31.842 --rc geninfo_all_blocks=1 00:12:31.842 --rc geninfo_unexecuted_blocks=1 00:12:31.842 00:12:31.842 ' 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.842 --rc genhtml_branch_coverage=1 00:12:31.842 --rc genhtml_function_coverage=1 00:12:31.842 --rc genhtml_legend=1 00:12:31.842 --rc geninfo_all_blocks=1 00:12:31.842 --rc geninfo_unexecuted_blocks=1 00:12:31.842 00:12:31.842 ' 00:12:31.842 10:12:37 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:31.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:31.842 --rc genhtml_branch_coverage=1 00:12:31.842 --rc genhtml_function_coverage=1 00:12:31.842 --rc genhtml_legend=1 00:12:31.842 --rc geninfo_all_blocks=1 00:12:31.842 --rc geninfo_unexecuted_blocks=1 00:12:31.842 00:12:31.842 ' 00:12:31.842 10:12:37 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:31.842 10:12:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:31.842 10:12:37 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.842 10:12:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.842 10:12:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.842 10:12:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:31.843 10:12:37 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:31.843 10:12:37 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:31.843 10:12:37 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:31.843 10:12:37 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:31.843 10:12:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.843 ************************************ 00:12:31.843 START TEST xnvme_to_malloc_dd_copy 00:12:31.843 ************************************ 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:31.843 10:12:37 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:31.843 10:12:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:31.843 { 00:12:31.843 "subsystems": [ 00:12:31.843 { 00:12:31.843 "subsystem": "bdev", 00:12:31.843 "config": [ 00:12:31.843 { 00:12:31.843 "params": { 00:12:31.843 "block_size": 512, 00:12:31.843 "num_blocks": 2097152, 00:12:31.843 "name": "malloc0" 00:12:31.843 }, 00:12:31.843 "method": "bdev_malloc_create" 00:12:31.843 }, 00:12:31.843 { 00:12:31.843 "params": { 00:12:31.843 "io_mechanism": "libaio", 00:12:31.843 "filename": "/dev/nullb0", 00:12:31.843 "name": "null0" 00:12:31.843 }, 00:12:31.843 "method": "bdev_xnvme_create" 00:12:31.843 }, 00:12:31.843 { 00:12:31.843 "method": "bdev_wait_for_examine" 00:12:31.843 } 00:12:31.843 ] 00:12:31.843 } 00:12:31.843 ] 00:12:31.843 } 00:12:31.843 [2024-11-04 10:12:37.494308] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:12:31.843 [2024-11-04 10:12:37.494427] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68648 ] 00:12:32.101 [2024-11-04 10:12:37.652770] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.101 [2024-11-04 10:12:37.752602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.996  [2024-11-04T10:12:41.110Z] Copying: 230/1024 [MB] (230 MBps) [2024-11-04T10:12:42.041Z] Copying: 461/1024 [MB] (230 MBps) [2024-11-04T10:12:42.973Z] Copying: 693/1024 [MB] (232 MBps) [2024-11-04T10:12:42.973Z] Copying: 972/1024 [MB] (278 MBps) [2024-11-04T10:12:44.874Z] Copying: 1024/1024 [MB] (average 245 MBps) 00:12:39.129 00:12:39.129 10:12:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:39.129 10:12:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:39.129 10:12:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:39.129 10:12:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:39.387 { 00:12:39.387 "subsystems": [ 00:12:39.387 { 00:12:39.387 "subsystem": "bdev", 00:12:39.387 "config": [ 00:12:39.387 { 00:12:39.387 "params": { 00:12:39.387 "block_size": 512, 00:12:39.387 "num_blocks": 2097152, 00:12:39.387 "name": "malloc0" 00:12:39.387 }, 00:12:39.387 "method": "bdev_malloc_create" 00:12:39.387 }, 00:12:39.387 { 00:12:39.387 "params": { 00:12:39.387 "io_mechanism": "libaio", 00:12:39.387 "filename": "/dev/nullb0", 00:12:39.387 "name": "null0" 00:12:39.387 }, 00:12:39.387 "method": "bdev_xnvme_create" 00:12:39.387 }, 00:12:39.387 { 00:12:39.387 "method": "bdev_wait_for_examine" 00:12:39.387 } 00:12:39.387 ] 00:12:39.387 } 00:12:39.387 ] 00:12:39.387 } 00:12:39.387 [2024-11-04 10:12:44.902385] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
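The second spdk_dd call traced above only swaps the --ib/--ob roles; with the config file from the earlier sketch, the by-hand equivalent would be:

    # Reverse copy: xnvme (null) bdev -> malloc bdev (xnvme.sh@47).
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json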
00:12:39.387 [2024-11-04 10:12:44.902499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68736 ] 00:12:39.387 [2024-11-04 10:12:45.058200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.647 [2024-11-04 10:12:45.144655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.588  [2024-11-04T10:12:48.264Z] Copying: 303/1024 [MB] (303 MBps) [2024-11-04T10:12:49.195Z] Copying: 608/1024 [MB] (305 MBps) [2024-11-04T10:12:49.453Z] Copying: 912/1024 [MB] (304 MBps) [2024-11-04T10:12:51.354Z] Copying: 1024/1024 [MB] (average 304 MBps) 00:12:45.609 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:45.609 10:12:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.609 { 00:12:45.609 "subsystems": [ 00:12:45.609 { 00:12:45.609 "subsystem": "bdev", 00:12:45.609 "config": [ 00:12:45.609 { 00:12:45.609 "params": { 00:12:45.609 "block_size": 512, 00:12:45.609 "num_blocks": 2097152, 00:12:45.609 "name": "malloc0" 00:12:45.609 }, 00:12:45.609 "method": "bdev_malloc_create" 00:12:45.609 }, 00:12:45.609 { 00:12:45.609 "params": { 00:12:45.609 "io_mechanism": "io_uring", 00:12:45.609 "filename": "/dev/nullb0", 00:12:45.609 "name": "null0" 00:12:45.609 }, 00:12:45.609 "method": "bdev_xnvme_create" 00:12:45.609 }, 00:12:45.609 { 00:12:45.609 "method": "bdev_wait_for_examine" 00:12:45.609 } 00:12:45.609 ] 00:12:45.609 } 00:12:45.609 ] 00:12:45.609 } 00:12:45.609 [2024-11-04 10:12:51.300361] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
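Both copy directions now repeat with io_mechanism=io_uring. Stripped of tracing, the loop xnvme.sh is walking through amounts to the following sketch (gen_conf is the helper already visible above; it regenerates the JSON with whichever mechanism is current):

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
      method_bdev_xnvme_create_0["io_mechanism"]=$io    # switch the xnvme backend
      # malloc -> null, then null -> malloc, each fed the regenerated config on fd 62
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)
    done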
00:12:45.609 [2024-11-04 10:12:51.300609] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68812 ] 00:12:45.868 [2024-11-04 10:12:51.457684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.868 [2024-11-04 10:12:51.540983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.772  [2024-11-04T10:12:54.451Z] Copying: 292/1024 [MB] (292 MBps) [2024-11-04T10:12:55.382Z] Copying: 586/1024 [MB] (293 MBps) [2024-11-04T10:12:55.946Z] Copying: 879/1024 [MB] (292 MBps) [2024-11-04T10:12:57.845Z] Copying: 1024/1024 [MB] (average 293 MBps) 00:12:52.100 00:12:52.100 10:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:52.100 10:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:52.100 10:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:52.100 10:12:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:52.100 { 00:12:52.100 "subsystems": [ 00:12:52.100 { 00:12:52.100 "subsystem": "bdev", 00:12:52.100 "config": [ 00:12:52.100 { 00:12:52.100 "params": { 00:12:52.100 "block_size": 512, 00:12:52.100 "num_blocks": 2097152, 00:12:52.100 "name": "malloc0" 00:12:52.100 }, 00:12:52.100 "method": "bdev_malloc_create" 00:12:52.100 }, 00:12:52.100 { 00:12:52.100 "params": { 00:12:52.100 "io_mechanism": "io_uring", 00:12:52.100 "filename": "/dev/nullb0", 00:12:52.100 "name": "null0" 00:12:52.100 }, 00:12:52.100 "method": "bdev_xnvme_create" 00:12:52.100 }, 00:12:52.100 { 00:12:52.100 "method": "bdev_wait_for_examine" 00:12:52.100 } 00:12:52.100 ] 00:12:52.100 } 00:12:52.100 ] 00:12:52.100 } 00:12:52.101 [2024-11-04 10:12:57.788911] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
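As a sanity check on the reported rates: at the ~293 MBps average above, one 1024 MB pass takes 1024 / 293 ≈ 3.5 s of copy time, which is consistent with the per-second progress stamps in the trace.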
00:12:52.101 [2024-11-04 10:12:57.789003] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68894 ] 00:12:52.358 [2024-11-04 10:12:57.938086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.358 [2024-11-04 10:12:58.019188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.256  [2024-11-04T10:13:00.933Z] Copying: 306/1024 [MB] (306 MBps) [2024-11-04T10:13:01.867Z] Copying: 609/1024 [MB] (303 MBps) [2024-11-04T10:13:02.433Z] Copying: 915/1024 [MB] (306 MBps) [2024-11-04T10:13:04.333Z] Copying: 1024/1024 [MB] (average 305 MBps) 00:12:58.588 00:12:58.588 10:13:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:12:58.588 10:13:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:12:58.588 ************************************ 00:12:58.588 END TEST xnvme_to_malloc_dd_copy 00:12:58.588 ************************************ 00:12:58.588 00:12:58.588 real 0m26.698s 00:12:58.588 user 0m23.656s 00:12:58.588 sys 0m2.506s 00:12:58.588 10:13:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:58.588 10:13:04 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 10:13:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:58.588 10:13:04 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:58.588 10:13:04 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:58.588 10:13:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 ************************************ 00:12:58.588 START TEST xnvme_bdevperf 00:12:58.588 ************************************ 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:12:58.588 
10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:58.588 10:13:04 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:58.588 { 00:12:58.588 "subsystems": [ 00:12:58.588 { 00:12:58.588 "subsystem": "bdev", 00:12:58.588 "config": [ 00:12:58.588 { 00:12:58.588 "params": { 00:12:58.588 "io_mechanism": "libaio", 00:12:58.588 "filename": "/dev/nullb0", 00:12:58.588 "name": "null0" 00:12:58.588 }, 00:12:58.588 "method": "bdev_xnvme_create" 00:12:58.588 }, 00:12:58.588 { 00:12:58.588 "method": "bdev_wait_for_examine" 00:12:58.588 } 00:12:58.588 ] 00:12:58.588 } 00:12:58.588 ] 00:12:58.588 } 00:12:58.588 [2024-11-04 10:13:04.222292] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:12:58.588 [2024-11-04 10:13:04.222407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68993 ] 00:12:58.846 [2024-11-04 10:13:04.378028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.846 [2024-11-04 10:13:04.458651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.104 Running I/O for 5 seconds... 00:13:00.971 191680.00 IOPS, 748.75 MiB/s [2024-11-04T10:13:08.089Z] 195616.00 IOPS, 764.12 MiB/s [2024-11-04T10:13:08.689Z] 197376.00 IOPS, 771.00 MiB/s [2024-11-04T10:13:10.063Z] 198272.00 IOPS, 774.50 MiB/s 00:13:04.318 Latency(us) 00:13:04.318 [2024-11-04T10:13:10.063Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:04.318 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:04.318 null0 : 5.00 198612.56 775.83 0.00 0.00 320.00 107.91 1651.00 00:13:04.318 [2024-11-04T10:13:10.063Z] =================================================================================================================== 00:13:04.318 [2024-11-04T10:13:10.063Z] Total : 198612.56 775.83 0.00 0.00 320.00 107.91 1651.00 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.576 10:13:10 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:04.576 { 00:13:04.576 "subsystems": [ 00:13:04.576 { 00:13:04.576 "subsystem": "bdev", 00:13:04.576 "config": [ 00:13:04.576 { 00:13:04.576 "params": { 00:13:04.576 "io_mechanism": "io_uring", 00:13:04.576 "filename": "/dev/nullb0", 00:13:04.576 "name": "null0" 00:13:04.576 }, 00:13:04.576 "method": "bdev_xnvme_create" 00:13:04.576 }, 00:13:04.576 { 00:13:04.576 "method": 
"bdev_wait_for_examine" 00:13:04.576 } 00:13:04.576 ] 00:13:04.576 } 00:13:04.576 ] 00:13:04.576 } 00:13:04.576 [2024-11-04 10:13:10.313239] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:13:04.576 [2024-11-04 10:13:10.313351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69067 ] 00:13:04.834 [2024-11-04 10:13:10.469922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.834 [2024-11-04 10:13:10.552352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.092 Running I/O for 5 seconds... 00:13:07.401 228032.00 IOPS, 890.75 MiB/s [2024-11-04T10:13:14.080Z] 228064.00 IOPS, 890.88 MiB/s [2024-11-04T10:13:15.019Z] 228352.00 IOPS, 892.00 MiB/s [2024-11-04T10:13:15.957Z] 228448.00 IOPS, 892.38 MiB/s 00:13:10.212 Latency(us) 00:13:10.212 [2024-11-04T10:13:15.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.212 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.212 null0 : 5.00 228382.11 892.12 0.00 0.00 277.92 157.54 1613.19 00:13:10.212 [2024-11-04T10:13:15.957Z] =================================================================================================================== 00:13:10.212 [2024-11-04T10:13:15.957Z] Total : 228382.11 892.12 0.00 0.00 277.92 157.54 1613.19 00:13:10.778 10:13:16 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:10.778 10:13:16 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:10.778 00:13:10.778 real 0m12.185s 00:13:10.778 user 0m9.853s 00:13:10.778 sys 0m2.105s 00:13:10.778 10:13:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.778 10:13:16 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.778 ************************************ 00:13:10.778 END TEST xnvme_bdevperf 00:13:10.778 ************************************ 00:13:10.778 00:13:10.778 real 0m39.097s 00:13:10.778 user 0m33.623s 00:13:10.778 sys 0m4.714s 00:13:10.778 10:13:16 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.778 10:13:16 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:10.778 ************************************ 00:13:10.778 END TEST nvme_xnvme 00:13:10.778 ************************************ 00:13:10.778 10:13:16 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:10.778 10:13:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:10.778 10:13:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.778 10:13:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.778 ************************************ 00:13:10.778 START TEST blockdev_xnvme 00:13:10.778 ************************************ 00:13:10.778 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:10.778 * Looking for test storage... 
00:13:10.778 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:10.778 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:10.778 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:10.778 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:11.036 10:13:16 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.036 --rc genhtml_branch_coverage=1 00:13:11.036 --rc genhtml_function_coverage=1 00:13:11.036 --rc genhtml_legend=1 00:13:11.036 --rc geninfo_all_blocks=1 00:13:11.036 --rc geninfo_unexecuted_blocks=1 00:13:11.036 00:13:11.036 ' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.036 --rc genhtml_branch_coverage=1 00:13:11.036 --rc genhtml_function_coverage=1 00:13:11.036 --rc genhtml_legend=1 
00:13:11.036 --rc geninfo_all_blocks=1 00:13:11.036 --rc geninfo_unexecuted_blocks=1 00:13:11.036 00:13:11.036 ' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.036 --rc genhtml_branch_coverage=1 00:13:11.036 --rc genhtml_function_coverage=1 00:13:11.036 --rc genhtml_legend=1 00:13:11.036 --rc geninfo_all_blocks=1 00:13:11.036 --rc geninfo_unexecuted_blocks=1 00:13:11.036 00:13:11.036 ' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:11.036 --rc genhtml_branch_coverage=1 00:13:11.036 --rc genhtml_function_coverage=1 00:13:11.036 --rc genhtml_legend=1 00:13:11.036 --rc geninfo_all_blocks=1 00:13:11.036 --rc geninfo_unexecuted_blocks=1 00:13:11.036 00:13:11.036 ' 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69209 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69209 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69209 ']' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:11.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.036 10:13:16 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:11.036 10:13:16 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:11.037 10:13:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:11.037 [2024-11-04 10:13:16.624542] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:13:11.037 [2024-11-04 10:13:16.624664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69209 ] 00:13:11.294 [2024-11-04 10:13:16.779215] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:11.294 [2024-11-04 10:13:16.861721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.859 10:13:17 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:11.859 10:13:17 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:13:11.859 10:13:17 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:11.859 10:13:17 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:11.859 10:13:17 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:11.859 10:13:17 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:11.859 10:13:17 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:12.143 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:12.143 Waiting for block devices as requested 00:13:12.143 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.400 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.400 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:12.400 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:17.719 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:17.719 nvme0n1 00:13:17.719 nvme1n1 00:13:17.719 nvme2n1 00:13:17.719 nvme2n2 00:13:17.719 nvme2n3 00:13:17.719 nvme3n1 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.719 10:13:23 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.719 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.719 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "37b053b8-1dce-4972-8e1f-7a7f3917112f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b053b8-1dce-4972-8e1f-7a7f3917112f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "351ed538-9d5c-40ab-889d-2b933e6b8fc4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "351ed538-9d5c-40ab-889d-2b933e6b8fc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8af3a3d7-89bf-4c38-8773-6cc78e64b187"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8af3a3d7-89bf-4c38-8773-6cc78e64b187",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9f22f6f8-8c86-4681-8030-e1e50192f2f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f22f6f8-8c86-4681-8030-e1e50192f2f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "67730bb2-fa71-4eb9-8f5f-abfd5da52106"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67730bb2-fa71-4eb9-8f5f-abfd5da52106",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f3969cd4-d70d-4986-8281-c252274d5795"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f3969cd4-d70d-4986-8281-c252274d5795",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:17.720 10:13:23 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69209 00:13:17.720 10:13:23 
blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69209 ']' 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69209 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69209 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:17.720 killing process with pid 69209 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69209' 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69209 00:13:17.720 10:13:23 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69209 00:13:19.099 10:13:24 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:19.099 10:13:24 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:19.099 10:13:24 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:19.099 10:13:24 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:19.099 10:13:24 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:19.099 ************************************ 00:13:19.099 START TEST bdev_hello_world 00:13:19.099 ************************************ 00:13:19.099 10:13:24 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:19.099 [2024-11-04 10:13:24.567372] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:13:19.099 [2024-11-04 10:13:24.567507] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69564 ] 00:13:19.099 [2024-11-04 10:13:24.722301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.099 [2024-11-04 10:13:24.808645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.669 [2024-11-04 10:13:25.102753] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:19.669 [2024-11-04 10:13:25.102811] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:19.669 [2024-11-04 10:13:25.102825] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:19.669 [2024-11-04 10:13:25.104366] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:19.669 [2024-11-04 10:13:25.104527] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:19.669 [2024-11-04 10:13:25.104544] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:19.669 [2024-11-04 10:13:25.104812] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
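The hello_bdev example running above opens the named xnvme bdev, writes the string through an io channel, and reads it back; the NOTICE lines ('Writing to the bdev', 'Read string from bdev : Hello World!') are the expected success path. A standalone run uses the same command the harness traced:

    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1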
00:13:19.669 00:13:19.669 [2024-11-04 10:13:25.104835] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:20.242 00:13:20.242 real 0m1.169s 00:13:20.242 user 0m0.906s 00:13:20.242 sys 0m0.152s 00:13:20.242 10:13:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:20.242 10:13:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 ************************************ 00:13:20.242 END TEST bdev_hello_world 00:13:20.242 ************************************ 00:13:20.242 10:13:25 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:20.242 10:13:25 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:20.242 10:13:25 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:20.242 10:13:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 ************************************ 00:13:20.242 START TEST bdev_bounds 00:13:20.242 ************************************ 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69595 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:20.242 Process bdevio pid: 69595 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69595' 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69595 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69595 ']' 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:20.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:20.242 10:13:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:20.242 [2024-11-04 10:13:25.792560] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
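The bdevio process started here runs with -w, so it sits listening on /var/tmp/spdk.sock until tests.py fires the suites, as the next lines show. A sketch of the equivalent two-step run (commands copied from the traced invocations; running them in separate shells is an assumption about how you would drive it interactively):

    # Shell 1: start bdevio against the xnvme bdev config and wait for RPCs.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

    # Shell 2: kick off the CUnit suites on every exposed bdev.
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests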
00:13:20.242 [2024-11-04 10:13:25.792696] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69595 ] 00:13:20.242 [2024-11-04 10:13:25.948651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:20.500 [2024-11-04 10:13:26.038899] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:20.500 [2024-11-04 10:13:26.039161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:20.500 [2024-11-04 10:13:26.039162] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.067 10:13:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:21.068 10:13:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:21.068 10:13:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:21.068 I/O targets: 00:13:21.068 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:21.068 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:21.068 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.068 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.068 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:21.068 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:21.068 00:13:21.068 00:13:21.068 CUnit - A unit testing framework for C - Version 2.1-3 00:13:21.068 http://cunit.sourceforge.net/ 00:13:21.068 00:13:21.068 00:13:21.068 Suite: bdevio tests on: nvme3n1 00:13:21.068 Test: blockdev write read block ...passed 00:13:21.068 Test: blockdev write zeroes read block ...passed 00:13:21.068 Test: blockdev write zeroes read no split ...passed 00:13:21.068 Test: blockdev write zeroes read split ...passed 00:13:21.068 Test: blockdev write zeroes read split partial ...passed 00:13:21.068 Test: blockdev reset ...passed 00:13:21.068 Test: blockdev write read 8 blocks ...passed 00:13:21.068 Test: blockdev write read size > 128k ...passed 00:13:21.068 Test: blockdev write read invalid size ...passed 00:13:21.068 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.068 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.068 Test: blockdev write read max offset ...passed 00:13:21.068 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.068 Test: blockdev writev readv 8 blocks ...passed 00:13:21.068 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.068 Test: blockdev writev readv block ...passed 00:13:21.068 Test: blockdev writev readv size > 128k ...passed 00:13:21.068 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.068 Test: blockdev comparev and writev ...passed 00:13:21.068 Test: blockdev nvme passthru rw ...passed 00:13:21.068 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.068 Test: blockdev nvme admin passthru ...passed 00:13:21.068 Test: blockdev copy ...passed 00:13:21.068 Suite: bdevio tests on: nvme2n3 00:13:21.068 Test: blockdev write read block ...passed 00:13:21.068 Test: blockdev write zeroes read block ...passed 00:13:21.068 Test: blockdev write zeroes read no split ...passed 00:13:21.068 Test: blockdev write zeroes read split ...passed 00:13:21.068 Test: blockdev write zeroes read split partial ...passed 00:13:21.068 Test: blockdev reset ...passed 
00:13:21.068 Test: blockdev write read 8 blocks ...passed 00:13:21.068 Test: blockdev write read size > 128k ...passed 00:13:21.329 Test: blockdev write read invalid size ...passed 00:13:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.329 Test: blockdev write read max offset ...passed 00:13:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.329 Test: blockdev writev readv 8 blocks ...passed 00:13:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.329 Test: blockdev writev readv block ...passed 00:13:21.329 Test: blockdev writev readv size > 128k ...passed 00:13:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.329 Test: blockdev comparev and writev ...passed 00:13:21.329 Test: blockdev nvme passthru rw ...passed 00:13:21.329 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.329 Test: blockdev nvme admin passthru ...passed 00:13:21.329 Test: blockdev copy ...passed 00:13:21.329 Suite: bdevio tests on: nvme2n2 00:13:21.329 Test: blockdev write read block ...passed 00:13:21.329 Test: blockdev write zeroes read block ...passed 00:13:21.329 Test: blockdev write zeroes read no split ...passed 00:13:21.329 Test: blockdev write zeroes read split ...passed 00:13:21.329 Test: blockdev write zeroes read split partial ...passed 00:13:21.329 Test: blockdev reset ...passed 00:13:21.329 Test: blockdev write read 8 blocks ...passed 00:13:21.329 Test: blockdev write read size > 128k ...passed 00:13:21.329 Test: blockdev write read invalid size ...passed 00:13:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.329 Test: blockdev write read max offset ...passed 00:13:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.329 Test: blockdev writev readv 8 blocks ...passed 00:13:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.329 Test: blockdev writev readv block ...passed 00:13:21.329 Test: blockdev writev readv size > 128k ...passed 00:13:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.329 Test: blockdev comparev and writev ...passed 00:13:21.329 Test: blockdev nvme passthru rw ...passed 00:13:21.329 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.329 Test: blockdev nvme admin passthru ...passed 00:13:21.329 Test: blockdev copy ...passed 00:13:21.329 Suite: bdevio tests on: nvme2n1 00:13:21.329 Test: blockdev write read block ...passed 00:13:21.329 Test: blockdev write zeroes read block ...passed 00:13:21.329 Test: blockdev write zeroes read no split ...passed 00:13:21.329 Test: blockdev write zeroes read split ...passed 00:13:21.329 Test: blockdev write zeroes read split partial ...passed 00:13:21.329 Test: blockdev reset ...passed 00:13:21.329 Test: blockdev write read 8 blocks ...passed 00:13:21.329 Test: blockdev write read size > 128k ...passed 00:13:21.329 Test: blockdev write read invalid size ...passed 00:13:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.329 Test: blockdev write read max offset ...passed 00:13:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.329 Test: blockdev writev readv 8 blocks 
...passed 00:13:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.329 Test: blockdev writev readv block ...passed 00:13:21.329 Test: blockdev writev readv size > 128k ...passed 00:13:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.329 Test: blockdev comparev and writev ...passed 00:13:21.329 Test: blockdev nvme passthru rw ...passed 00:13:21.329 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.329 Test: blockdev nvme admin passthru ...passed 00:13:21.329 Test: blockdev copy ...passed 00:13:21.329 Suite: bdevio tests on: nvme1n1 00:13:21.329 Test: blockdev write read block ...passed 00:13:21.329 Test: blockdev write zeroes read block ...passed 00:13:21.329 Test: blockdev write zeroes read no split ...passed 00:13:21.329 Test: blockdev write zeroes read split ...passed 00:13:21.329 Test: blockdev write zeroes read split partial ...passed 00:13:21.329 Test: blockdev reset ...passed 00:13:21.329 Test: blockdev write read 8 blocks ...passed 00:13:21.329 Test: blockdev write read size > 128k ...passed 00:13:21.329 Test: blockdev write read invalid size ...passed 00:13:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.329 Test: blockdev write read max offset ...passed 00:13:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.329 Test: blockdev writev readv 8 blocks ...passed 00:13:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.329 Test: blockdev writev readv block ...passed 00:13:21.329 Test: blockdev writev readv size > 128k ...passed 00:13:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.329 Test: blockdev comparev and writev ...passed 00:13:21.329 Test: blockdev nvme passthru rw ...passed 00:13:21.329 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.329 Test: blockdev nvme admin passthru ...passed 00:13:21.329 Test: blockdev copy ...passed 00:13:21.329 Suite: bdevio tests on: nvme0n1 00:13:21.329 Test: blockdev write read block ...passed 00:13:21.329 Test: blockdev write zeroes read block ...passed 00:13:21.329 Test: blockdev write zeroes read no split ...passed 00:13:21.329 Test: blockdev write zeroes read split ...passed 00:13:21.329 Test: blockdev write zeroes read split partial ...passed 00:13:21.329 Test: blockdev reset ...passed 00:13:21.329 Test: blockdev write read 8 blocks ...passed 00:13:21.329 Test: blockdev write read size > 128k ...passed 00:13:21.329 Test: blockdev write read invalid size ...passed 00:13:21.329 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:21.329 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:21.329 Test: blockdev write read max offset ...passed 00:13:21.329 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:21.329 Test: blockdev writev readv 8 blocks ...passed 00:13:21.329 Test: blockdev writev readv 30 x 1block ...passed 00:13:21.329 Test: blockdev writev readv block ...passed 00:13:21.329 Test: blockdev writev readv size > 128k ...passed 00:13:21.329 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:21.329 Test: blockdev comparev and writev ...passed 00:13:21.330 Test: blockdev nvme passthru rw ...passed 00:13:21.330 Test: blockdev nvme passthru vendor specific ...passed 00:13:21.330 Test: blockdev nvme admin passthru ...passed 00:13:21.330 Test: blockdev copy ...passed 
00:13:21.330 00:13:21.330 Run Summary: Type Total Ran Passed Failed Inactive 00:13:21.330 suites 6 6 n/a 0 0 00:13:21.330 tests 138 138 138 0 0 00:13:21.330 asserts 780 780 780 0 n/a 00:13:21.330 00:13:21.330 Elapsed time = 0.863 seconds 00:13:21.330 0 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69595 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69595 ']' 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69595 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69595 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:21.330 killing process with pid 69595 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69595' 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69595 00:13:21.330 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69595 00:13:21.897 10:13:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:21.897 00:13:21.897 real 0m1.905s 00:13:21.897 user 0m4.819s 00:13:21.897 sys 0m0.266s 00:13:21.897 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:21.897 ************************************ 00:13:21.897 END TEST bdev_bounds 00:13:21.897 ************************************ 00:13:21.897 10:13:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 10:13:27 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.155 10:13:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:22.155 10:13:27 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:22.155 10:13:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 ************************************ 00:13:22.155 START TEST bdev_nbd 00:13:22.155 ************************************ 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
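The bdev_nbd test starting above drives nbd_function_test: each SPDK bdev is exported as a kernel /dev/nbd* block device through the RPC server on /var/tmp/spdk-nbd.sock, then exercised with O_DIRECT dd reads. A minimal manual sketch of the same start/read/stop cycle, using only RPCs that appear in this trace (run from the SPDK repo root; the bdev name nvme0n1 and socket path are taken from the log):

  sudo modprobe nbd    # the test checks /sys/module/nbd; load the module if absent
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
  sudo dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct   # verify the export is readable
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0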
00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69651 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69651 /var/tmp/spdk-nbd.sock 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69651 ']' 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:22.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:22.155 10:13:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:22.155 [2024-11-04 10:13:27.769497] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
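The RPC target here is the bdev_svc stub app, launched with -r /var/tmp/spdk-nbd.sock -i 0 and the bdev.json config; waitforlisten then blocks until the UNIX socket answers, and the trap above guarantees killprocess $nbd_pid on exit. A simplified sketch of the waitforlisten idea (not the actual helper, which also tracks the PID): poll the socket with a cheap RPC until it responds:

  # poll until the SPDK RPC socket accepts requests (simplified; gives up after ~10 s)
  for i in $(seq 1 100); do
      ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods &>/dev/null && break
      sleep 0.1
  done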
00:13:22.155 [2024-11-04 10:13:27.769617] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.414 [2024-11-04 10:13:27.928392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.414 [2024-11-04 10:13:28.031189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:22.977 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.237 
1+0 records in 00:13:23.237 1+0 records out 00:13:23.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277104 s, 14.8 MB/s 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.237 1+0 records in 00:13:23.237 1+0 records out 00:13:23.237 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00153773 s, 2.7 MB/s 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.237 10:13:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:23.495 10:13:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.495 1+0 records in 00:13:23.495 1+0 records out 00:13:23.495 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335466 s, 12.2 MB/s 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.495 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:23.753 1+0 records in 00:13:23.753 1+0 records out 00:13:23.753 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295648 s, 13.9 MB/s 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:23.753 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.011 1+0 records in 00:13:24.011 1+0 records out 00:13:24.011 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267177 s, 15.3 MB/s 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.011 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:24.269 10:13:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:24.269 1+0 records in 00:13:24.269 1+0 records out 00:13:24.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360528 s, 11.4 MB/s 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:24.269 10:13:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd0", 00:13:24.527 "bdev_name": "nvme0n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd1", 00:13:24.527 "bdev_name": "nvme1n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd2", 00:13:24.527 "bdev_name": "nvme2n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd3", 00:13:24.527 "bdev_name": "nvme2n2" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd4", 00:13:24.527 "bdev_name": "nvme2n3" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd5", 00:13:24.527 "bdev_name": "nvme3n1" 00:13:24.527 } 00:13:24.527 ]' 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd0", 00:13:24.527 "bdev_name": "nvme0n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd1", 00:13:24.527 "bdev_name": "nvme1n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd2", 00:13:24.527 "bdev_name": "nvme2n1" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd3", 00:13:24.527 "bdev_name": "nvme2n2" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": "/dev/nbd4", 00:13:24.527 "bdev_name": "nvme2n3" 00:13:24.527 }, 00:13:24.527 { 00:13:24.527 "nbd_device": 
"/dev/nbd5", 00:13:24.527 "bdev_name": "nvme3n1" 00:13:24.527 } 00:13:24.527 ]' 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.527 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:24.786 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.044 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.045 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:25.045 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.045 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.045 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.045 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.303 10:13:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:25.561 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:25.820 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:26.078 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.079 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:26.336 /dev/nbd0 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.336 1+0 records in 00:13:26.336 1+0 records out 00:13:26.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000392136 s, 10.4 MB/s 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:26.336 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.337 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:26.337 10:13:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:26.337 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.337 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.337 10:13:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:26.599 /dev/nbd1 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.599 1+0 records in 00:13:26.599 1+0 records out 00:13:26.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445529 s, 9.2 MB/s 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:26.599 10:13:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:26.599 /dev/nbd10 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:26.599 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.857 1+0 records in 00:13:26.857 1+0 records out 00:13:26.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421488 s, 9.7 MB/s 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:26.857 /dev/nbd11 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:26.857 10:13:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:26.857 1+0 records in 00:13:26.857 1+0 records out 00:13:26.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000611827 s, 6.7 MB/s 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:26.857 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:27.116 /dev/nbd12 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.116 1+0 records in 00:13:27.116 1+0 records out 00:13:27.116 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000294898 s, 13.9 MB/s 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.116 10:13:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:27.374 /dev/nbd13 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:27.374 1+0 records in 00:13:27.374 1+0 records out 00:13:27.374 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429605 s, 9.5 MB/s 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:27.374 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd0", 00:13:27.632 "bdev_name": "nvme0n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd1", 00:13:27.632 "bdev_name": "nvme1n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd10", 00:13:27.632 "bdev_name": "nvme2n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd11", 00:13:27.632 "bdev_name": "nvme2n2" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd12", 00:13:27.632 "bdev_name": "nvme2n3" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd13", 00:13:27.632 "bdev_name": "nvme3n1" 00:13:27.632 } 00:13:27.632 ]' 00:13:27.632 10:13:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd0", 00:13:27.632 "bdev_name": "nvme0n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd1", 00:13:27.632 "bdev_name": "nvme1n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd10", 00:13:27.632 "bdev_name": "nvme2n1" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd11", 00:13:27.632 "bdev_name": "nvme2n2" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd12", 00:13:27.632 "bdev_name": "nvme2n3" 00:13:27.632 }, 00:13:27.632 { 00:13:27.632 "nbd_device": "/dev/nbd13", 00:13:27.632 "bdev_name": "nvme3n1" 00:13:27.632 } 00:13:27.632 ]' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:27.632 /dev/nbd1 00:13:27.632 /dev/nbd10 00:13:27.632 /dev/nbd11 00:13:27.632 /dev/nbd12 00:13:27.632 /dev/nbd13' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:27.632 /dev/nbd1 00:13:27.632 /dev/nbd10 00:13:27.632 /dev/nbd11 00:13:27.632 /dev/nbd12 00:13:27.632 /dev/nbd13' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:27.632 256+0 records in 00:13:27.632 256+0 records out 00:13:27.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103979 s, 101 MB/s 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:27.632 256+0 records in 00:13:27.632 256+0 records out 00:13:27.632 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0600278 s, 17.5 MB/s 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.632 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:27.890 256+0 records in 00:13:27.890 256+0 records out 00:13:27.890 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.0704648 s, 14.9 MB/s 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:27.890 256+0 records in 00:13:27.890 256+0 records out 00:13:27.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0597528 s, 17.5 MB/s 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:27.890 256+0 records in 00:13:27.890 256+0 records out 00:13:27.890 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.059002 s, 17.8 MB/s 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:27.890 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:28.148 256+0 records in 00:13:28.148 256+0 records out 00:13:28.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0699157 s, 15.0 MB/s 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:28.148 256+0 records in 00:13:28.148 256+0 records out 00:13:28.148 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0713937 s, 14.7 MB/s 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.148 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.149 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.407 10:13:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.665 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:28.922 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:28.923 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:29.181 10:13:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:29.440 10:13:35 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.440 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:29.727 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:29.728 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:29.985 malloc_lvol_verify 00:13:29.985 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:29.985 591dc73b-40fc-4d3b-9cfe-4c29ea7d33d4 00:13:30.243 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:30.243 001414bb-b5ca-4f0e-b8f9-1417aa759e3f 00:13:30.243 10:13:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:30.501 /dev/nbd0 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
00:13:30.501 mke2fs 1.47.0 (5-Feb-2023) 00:13:30.501 Discarding device blocks: 0/4096 done 00:13:30.501 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:30.501 00:13:30.501 Allocating group tables: 0/1 done 00:13:30.501 Writing inode tables: 0/1 done 00:13:30.501 Creating journal (1024 blocks): done 00:13:30.501 Writing superblocks and filesystem accounting information: 0/1 done 00:13:30.501 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.501 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69651 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69651 ']' 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69651 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69651 00:13:30.759 killing process with pid 69651 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69651' 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69651 00:13:30.759 10:13:36 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69651 00:13:31.692 10:13:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:31.692 00:13:31.692 real 0m9.470s 00:13:31.692 user 0m13.431s 00:13:31.692 sys 0m3.172s 00:13:31.692 10:13:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:31.692 ************************************ 00:13:31.692 END TEST bdev_nbd 00:13:31.692 ************************************ 00:13:31.692 
10:13:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:31.692 10:13:37 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:31.692 10:13:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:31.692 10:13:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:31.692 10:13:37 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:31.692 10:13:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:31.692 10:13:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.692 10:13:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:31.692 ************************************ 00:13:31.692 START TEST bdev_fio 00:13:31.692 ************************************ 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:31.692 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo 
serialize_overlap=1 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:31.692 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:31.693 ************************************ 00:13:31.693 START TEST bdev_fio_rw_verify 00:13:31.693 ************************************ 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:31.693 10:13:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:31.951 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:31.951 fio-3.35 00:13:31.951 Starting 6 threads 00:13:44.144 00:13:44.144 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70046: Mon Nov 4 10:13:48 2024 00:13:44.144 read: IOPS=43.4k, BW=169MiB/s (178MB/s)(1694MiB/10001msec) 00:13:44.144 slat (usec): min=2, max=3634, avg= 4.57, stdev= 6.77 00:13:44.144 clat (usec): min=74, max=14217, avg=386.49, 
stdev=207.73 00:13:44.144 lat (usec): min=77, max=14224, avg=391.06, stdev=208.19 00:13:44.144 clat percentiles (usec): 00:13:44.144 | 50.000th=[ 359], 99.000th=[ 979], 99.900th=[ 1467], 99.990th=[ 3818], 00:13:44.144 | 99.999th=[14222] 00:13:44.144 write: IOPS=43.7k, BW=171MiB/s (179MB/s)(1708MiB/10001msec); 0 zone resets 00:13:44.144 slat (usec): min=10, max=1578, avg=21.69, stdev=30.03 00:13:44.144 clat (usec): min=50, max=6856, avg=504.64, stdev=215.87 00:13:44.144 lat (usec): min=76, max=6899, avg=526.33, stdev=219.26 00:13:44.144 clat percentiles (usec): 00:13:44.144 | 50.000th=[ 478], 99.000th=[ 1139], 99.900th=[ 1631], 99.990th=[ 2442], 00:13:44.144 | 99.999th=[ 6652] 00:13:44.144 bw ( KiB/s): min=150920, max=196928, per=100.00%, avg=175110.63, stdev=2239.35, samples=114 00:13:44.144 iops : min=37730, max=49232, avg=43777.16, stdev=559.82, samples=114 00:13:44.144 lat (usec) : 100=0.12%, 250=16.81%, 500=49.14%, 750=25.79%, 1000=6.45% 00:13:44.144 lat (msec) : 2=1.65%, 4=0.03%, 10=0.01%, 20=0.01% 00:13:44.144 cpu : usr=53.80%, sys=28.50%, ctx=10872, majf=0, minf=34573 00:13:44.144 IO depths : 1=11.8%, 2=24.1%, 4=50.8%, 8=13.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:44.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.144 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:44.144 issued rwts: total=433787,437260,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:44.144 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:44.144 00:13:44.144 Run status group 0 (all jobs): 00:13:44.144 READ: bw=169MiB/s (178MB/s), 169MiB/s-169MiB/s (178MB/s-178MB/s), io=1694MiB (1777MB), run=10001-10001msec 00:13:44.144 WRITE: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=1708MiB (1791MB), run=10001-10001msec 00:13:44.144 ----------------------------------------------------- 00:13:44.144 Suppressions used: 00:13:44.144 count bytes template 00:13:44.144 6 48 /usr/src/fio/parse.c 00:13:44.144 3164 303744 /usr/src/fio/iolog.c 00:13:44.144 1 8 libtcmalloc_minimal.so 00:13:44.144 1 904 libcrypto.so 00:13:44.144 ----------------------------------------------------- 00:13:44.144 00:13:44.144 00:13:44.144 real 0m11.873s 00:13:44.144 user 0m33.753s 00:13:44.144 sys 0m17.369s 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:44.144 ************************************ 00:13:44.144 END TEST bdev_fio_rw_verify 00:13:44.144 ************************************ 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:44.144 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "37b053b8-1dce-4972-8e1f-7a7f3917112f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "37b053b8-1dce-4972-8e1f-7a7f3917112f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "351ed538-9d5c-40ab-889d-2b933e6b8fc4"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "351ed538-9d5c-40ab-889d-2b933e6b8fc4",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "8af3a3d7-89bf-4c38-8773-6cc78e64b187"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "8af3a3d7-89bf-4c38-8773-6cc78e64b187",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "9f22f6f8-8c86-4681-8030-e1e50192f2f8"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9f22f6f8-8c86-4681-8030-e1e50192f2f8",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "67730bb2-fa71-4eb9-8f5f-abfd5da52106"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "67730bb2-fa71-4eb9-8f5f-abfd5da52106",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "f3969cd4-d70d-4986-8281-c252274d5795"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "f3969cd4-d70d-4986-8281-c252274d5795",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:44.145 /home/vagrant/spdk_repo/spdk 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:13:44.145 00:13:44.145 real 0m12.006s 00:13:44.145 user 0m33.820s 00:13:44.145 sys 0m17.438s 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.145 10:13:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 ************************************ 00:13:44.145 END TEST bdev_fio 00:13:44.145 ************************************ 00:13:44.145 10:13:49 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:44.145 10:13:49 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:44.145 10:13:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:44.145 10:13:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:44.145 10:13:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:44.145 ************************************ 00:13:44.145 START TEST bdev_verify 00:13:44.145 ************************************ 00:13:44.145 10:13:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:44.145 [2024-11-04 10:13:49.297025] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:13:44.145 [2024-11-04 10:13:49.297118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70217 ] 00:13:44.145 [2024-11-04 10:13:49.447261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:44.145 [2024-11-04 10:13:49.530264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.145 [2024-11-04 10:13:49.530369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.145 Running I/O for 5 seconds... 
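The bdev_verify stage that just started is bdevperf pointed at the generated bdev.json; the invocation shown above decodes as follows. The glosses are standard bdevperf option semantics rather than anything this log states:

    # Same verify pass outside the harness; paths shortened for readability.
    ./build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3
    #   -q 128     keep 128 I/Os outstanding per job
    #   -o 4096    4 KiB per I/O
    #   -w verify  write a pattern, read it back, and compare
    #   -t 5       run for 5 seconds ("Running I/O for 5 seconds..." above)
    #   -m 0x3     core mask matching the two reactors started on cores 0 and 1
    # -C is carried over from the harness invocation unchanged.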
00:13:46.453 25472.00 IOPS, 99.50 MiB/s
[2024-11-04T10:13:53.135Z] 25152.00 IOPS, 98.25 MiB/s
[2024-11-04T10:13:54.079Z] 25088.00 IOPS, 98.00 MiB/s
[2024-11-04T10:13:55.021Z] 24880.00 IOPS, 97.19 MiB/s
[2024-11-04T10:13:55.021Z] 24307.20 IOPS, 94.95 MiB/s
00:13:49.276 Latency(us)
00:13:49.276 [2024-11-04T10:13:55.021Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:49.276 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0xa0000
00:13:49.276 nvme0n1 : 5.06 1821.89 7.12 0.00 0.00 70143.67 8771.74 69367.34
00:13:49.276 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0xa0000 length 0xa0000
00:13:49.276 nvme0n1 : 5.04 1727.94 6.75 0.00 0.00 73950.10 10132.87 69367.34
00:13:49.276 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0xbd0bd
00:13:49.276 nvme1n1 : 5.07 3081.96 12.04 0.00 0.00 41368.70 3906.95 61301.37
00:13:49.276 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:13:49.276 nvme1n1 : 5.04 3082.95 12.04 0.00 0.00 41342.78 2835.69 55251.89
00:13:49.276 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0x80000
00:13:49.276 nvme2n1 : 5.04 1852.66 7.24 0.00 0.00 68610.73 7461.02 64124.46
00:13:49.276 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x80000 length 0x80000
00:13:49.276 nvme2n1 : 5.06 1770.42 6.92 0.00 0.00 71909.24 11191.53 66947.54
00:13:49.276 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0x80000
00:13:49.276 nvme2n2 : 5.05 1824.48 7.13 0.00 0.00 69505.13 13712.15 64931.05
00:13:49.276 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x80000 length 0x80000
00:13:49.276 nvme2n2 : 5.05 1748.57 6.83 0.00 0.00 72714.35 7461.02 64931.05
00:13:49.276 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0x80000
00:13:49.276 nvme2n3 : 5.07 1842.28 7.20 0.00 0.00 68729.35 8721.33 64931.05
00:13:49.276 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x80000 length 0x80000
00:13:49.276 nvme2n3 : 5.06 1747.14 6.82 0.00 0.00 72635.26 12098.95 64124.46
00:13:49.276 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x0 length 0x20000
00:13:49.276 nvme3n1 : 5.08 1840.92 7.19 0.00 0.00 68679.62 6452.78 67350.84
00:13:49.276 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:13:49.276 Verification LBA range: start 0x20000 length 0x20000
00:13:49.276 nvme3n1 : 5.06 1746.64 6.82 0.00 0.00 72535.44 10737.82 66947.54
00:13:49.276 [2024-11-04T10:13:55.021Z] ===================================================================================================================
00:13:49.276 [2024-11-04T10:13:55.021Z] Total : 24087.84 94.09 0.00 0.00 63332.21 2835.69 69367.34
00:13:50.217
00:13:50.217 real 0m6.486s
00:13:50.217 user 0m10.406s
00:13:50.217 sys 0m1.595s
00:13:50.217 10:13:55 blockdev_xnvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:50.217 10:13:55 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:50.217 ************************************ 00:13:50.217 END TEST bdev_verify 00:13:50.217 ************************************ 00:13:50.217 10:13:55 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.217 10:13:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:50.217 10:13:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.217 10:13:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:50.217 ************************************ 00:13:50.217 START TEST bdev_verify_big_io 00:13:50.217 ************************************ 00:13:50.217 10:13:55 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:50.217 [2024-11-04 10:13:55.874386] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:13:50.217 [2024-11-04 10:13:55.874539] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70318 ] 00:13:50.477 [2024-11-04 10:13:56.039661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:50.478 [2024-11-04 10:13:56.161168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:50.478 [2024-11-04 10:13:56.161292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.047 Running I/O for 5 seconds... 
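Before the results land: bdev_verify_big_io repeats the verify workload at -o 65536, so every completed I/O moves 64 KiB and throughput in MiB/s is simply IOPS divided by 16. A quick sanity check against the first progress sample below:

    # 1504 IOPS at 65536 bytes per I/O, converted to MiB/s:
    echo $(( 1504 * 65536 / 1048576 ))   # prints 94, matching "1504.00 IOPS, 94.00 MiB/s"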
00:13:56.867 1504.00 IOPS, 94.00 MiB/s
[2024-11-04T10:14:02.869Z] 2440.00 IOPS, 152.50 MiB/s
00:13:57.124 Latency(us)
00:13:57.124 [2024-11-04T10:14:02.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:57.124 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0x0 length 0xa000
00:13:57.124 nvme0n1 : 5.95 104.89 6.56 0.00 0.00 1164132.32 75013.51 1329271.73
00:13:57.124 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0xa000 length 0xa000
00:13:57.124 nvme0n1 : 6.05 115.03 7.19 0.00 0.00 1032681.64 183904.10 1348630.06
00:13:57.124 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0x0 length 0xbd0b
00:13:57.124 nvme1n1 : 6.05 116.34 7.27 0.00 0.00 1030158.32 104051.00 1742249.35
00:13:57.124 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0xbd0b length 0xbd0b
00:13:57.124 nvme1n1 : 5.74 80.84 5.05 0.00 0.00 1475339.49 8116.38 2890843.37
00:13:57.124 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0x0 length 0x8000
00:13:57.124 nvme2n1 : 6.05 124.23 7.76 0.00 0.00 925085.42 100421.32 1129235.69
00:13:57.124 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.124 Verification LBA range: start 0x8000 length 0x8000
00:13:57.124 nvme2n1 : 6.05 100.45 6.28 0.00 0.00 1139145.03 200036.04 1064707.94
00:13:57.125 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x0 length 0x8000
00:13:57.125 nvme2n2 : 6.06 103.02 6.44 0.00 0.00 1069134.94 2974.33 1497043.89
00:13:57.125 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x8000 length 0x8000
00:13:57.125 nvme2n2 : 6.15 119.76 7.48 0.00 0.00 934023.89 88725.66 1497043.89
00:13:57.125 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x0 length 0x8000
00:13:57.125 nvme2n3 : 6.06 116.16 7.26 0.00 0.00 926547.28 27021.00 1238932.87
00:13:57.125 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x8000 length 0x8000
00:13:57.125 nvme2n3 : 6.14 132.97 8.31 0.00 0.00 806606.31 79853.10 1019538.51
00:13:57.125 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x0 length 0x2000
00:13:57.125 nvme3n1 : 6.15 153.50 9.59 0.00 0.00 680853.82 4133.81 1000180.18
00:13:57.125 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:13:57.125 Verification LBA range: start 0x2000 length 0x2000
00:13:57.125 nvme3n1 : 6.15 156.12 9.76 0.00 0.00 675566.93 2029.10 1071160.71
00:13:57.125 [2024-11-04T10:14:02.870Z] ===================================================================================================================
00:13:57.125 [2024-11-04T10:14:02.870Z] Total : 1423.30 88.96 0.00 0.00 949901.58 2029.10 2890843.37
00:13:58.061
00:13:58.061 real 0m7.846s
00:13:58.061 user 0m14.537s
00:13:58.061 sys 0m0.368s
00:13:58.061 10:14:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:13:58.061 10:14:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:13:58.061 ************************************
00:13:58.061 END TEST bdev_verify_big_io
00:13:58.061 ************************************
00:13:58.061 10:14:03 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:58.061 10:14:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:13:58.061 10:14:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:13:58.061 10:14:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:58.061 ************************************
00:13:58.061 START TEST bdev_write_zeroes
00:13:58.061 ************************************
00:13:58.061 10:14:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:13:58.330 [2024-11-04 10:14:03.754278] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization...
00:13:58.330 [2024-11-04 10:14:03.754396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70423 ]
00:13:58.330 [2024-11-04 10:14:03.911147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:58.330 [2024-11-04 10:14:04.009288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:58.897 Running I/O for 1 seconds...
00:13:59.830 78400.00 IOPS, 306.25 MiB/s
00:13:59.830
00:13:59.830 Latency(us)
00:13:59.830 [2024-11-04T10:14:05.575Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:13:59.830 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme0n1 : 1.02 11216.84 43.82 0.00 0.00 11401.66 7763.50 24097.08
00:13:59.830 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme1n1 : 1.02 21906.78 85.57 0.00 0.00 5831.79 3503.66 11998.13
00:13:59.830 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme2n1 : 1.02 11160.32 43.60 0.00 0.00 11400.65 6704.84 23492.14
00:13:59.830 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme2n2 : 1.02 11147.63 43.55 0.00 0.00 11407.86 6906.49 23693.78
00:13:59.830 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme2n3 : 1.02 11135.05 43.50 0.00 0.00 11411.68 7158.55 23996.26
00:13:59.830 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:13:59.830 nvme3n1 : 1.02 11122.53 43.45 0.00 0.00 11417.08 7360.20 24197.91
00:13:59.830 [2024-11-04T10:14:05.575Z] ===================================================================================================================
00:13:59.830 [2024-11-04T10:14:05.575Z] Total : 77689.16 303.47 0.00 0.00 9837.15 3503.66 24197.91
00:14:00.395
00:14:00.395 real 0m2.423s
00:14:00.395 user 0m1.678s
00:14:00.395 sys 0m0.593s
00:14:00.395 10:14:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:14:00.395 10:14:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:14:00.395 ************************************
00:14:00.395 END
TEST bdev_write_zeroes 00:14:00.395 ************************************ 00:14:00.653 10:14:06 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.653 10:14:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:00.653 10:14:06 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:00.653 10:14:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.653 ************************************ 00:14:00.653 START TEST bdev_json_nonenclosed 00:14:00.653 ************************************ 00:14:00.653 10:14:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:00.653 [2024-11-04 10:14:06.226176] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:14:00.653 [2024-11-04 10:14:06.226292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70472 ] 00:14:00.653 [2024-11-04 10:14:06.384429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.911 [2024-11-04 10:14:06.484758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.911 [2024-11-04 10:14:06.484852] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:00.911 [2024-11-04 10:14:06.484871] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:00.911 [2024-11-04 10:14:06.484880] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.169 00:14:01.169 real 0m0.498s 00:14:01.169 user 0m0.306s 00:14:01.169 sys 0m0.089s 00:14:01.169 10:14:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.169 10:14:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:01.169 ************************************ 00:14:01.169 END TEST bdev_json_nonenclosed 00:14:01.169 ************************************ 00:14:01.169 10:14:06 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.169 10:14:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:01.169 10:14:06 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:01.169 10:14:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:01.169 ************************************ 00:14:01.169 START TEST bdev_json_nonarray 00:14:01.169 ************************************ 00:14:01.169 10:14:06 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:01.169 [2024-11-04 10:14:06.762248] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:14:01.169 [2024-11-04 10:14:06.762365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70497 ] 00:14:01.428 [2024-11-04 10:14:06.919363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.428 [2024-11-04 10:14:07.019690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.428 [2024-11-04 10:14:07.019772] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 00:14:01.429 [2024-11-04 10:14:07.019801] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:01.429 [2024-11-04 10:14:07.019810] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:01.709 00:14:01.709 real 0m0.494s 00:14:01.709 user 0m0.298s 00:14:01.709 sys 0m0.092s 00:14:01.709 10:14:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.709 ************************************ 00:14:01.709 END TEST bdev_json_nonarray 00:14:01.709 ************************************ 00:14:01.709 10:14:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:01.709 10:14:07 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:01.967 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:34.114 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.183 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.183 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.183 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:52.183 00:14:52.183 real 1m41.272s 00:14:52.183 user 1m32.520s 00:14:52.183 sys 2m39.426s 00:14:52.183 10:14:57 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:52.183 10:14:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:52.183 ************************************ 00:14:52.183 END TEST blockdev_xnvme 00:14:52.183 ************************************ 00:14:52.183 10:14:57 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:52.183 10:14:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:52.183 10:14:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.183 10:14:57 -- 
common/autotest_common.sh@10 -- # set +x 00:14:52.183 ************************************ 00:14:52.183 START TEST ublk 00:14:52.183 ************************************ 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:52.183 * Looking for test storage... 00:14:52.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:52.183 10:14:57 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:52.183 10:14:57 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:52.183 10:14:57 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:52.183 10:14:57 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:52.183 10:14:57 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:52.183 10:14:57 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:52.183 10:14:57 ublk -- scripts/common.sh@345 -- # : 1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:52.183 10:14:57 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:52.183 10:14:57 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@353 -- # local d=1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:52.183 10:14:57 ublk -- scripts/common.sh@355 -- # echo 1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:52.183 10:14:57 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@353 -- # local d=2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:52.183 10:14:57 ublk -- scripts/common.sh@355 -- # echo 2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:52.183 10:14:57 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:52.183 10:14:57 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:52.183 10:14:57 ublk -- scripts/common.sh@368 -- # return 0 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.183 --rc genhtml_branch_coverage=1 00:14:52.183 --rc genhtml_function_coverage=1 00:14:52.183 --rc genhtml_legend=1 00:14:52.183 --rc geninfo_all_blocks=1 00:14:52.183 --rc geninfo_unexecuted_blocks=1 00:14:52.183 00:14:52.183 ' 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.183 --rc genhtml_branch_coverage=1 00:14:52.183 --rc genhtml_function_coverage=1 00:14:52.183 --rc genhtml_legend=1 00:14:52.183 --rc geninfo_all_blocks=1 00:14:52.183 --rc geninfo_unexecuted_blocks=1 00:14:52.183 00:14:52.183 ' 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.183 --rc genhtml_branch_coverage=1 00:14:52.183 --rc genhtml_function_coverage=1 00:14:52.183 --rc genhtml_legend=1 00:14:52.183 --rc geninfo_all_blocks=1 00:14:52.183 --rc geninfo_unexecuted_blocks=1 00:14:52.183 00:14:52.183 ' 00:14:52.183 10:14:57 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:52.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:52.183 --rc genhtml_branch_coverage=1 00:14:52.183 --rc genhtml_function_coverage=1 00:14:52.183 --rc genhtml_legend=1 00:14:52.183 --rc geninfo_all_blocks=1 00:14:52.183 --rc geninfo_unexecuted_blocks=1 00:14:52.183 00:14:52.183 ' 00:14:52.183 10:14:57 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:52.183 10:14:57 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:52.183 10:14:57 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:52.183 10:14:57 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:52.183 10:14:57 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:52.184 10:14:57 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:52.184 10:14:57 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:52.184 10:14:57 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:52.184 10:14:57 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:52.184 10:14:57 ublk 
-- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:52.184 10:14:57 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:52.184 10:14:57 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:52.184 10:14:57 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:52.184 10:14:57 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:52.184 ************************************ 00:14:52.184 START TEST test_save_ublk_config 00:14:52.184 ************************************ 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70799 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70799 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70799 ']' 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:52.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:52.184 10:14:57 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:52.442 [2024-11-04 10:14:57.928044] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
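The trace above loads the ublk_drv kernel module, starts spdk_tgt with ublk debug tracing, and then blocks in waitforlisten until the target answers on its UNIX-domain RPC socket. A minimal manual equivalent (a sketch, assuming a default build tree and the bundled scripts/rpc.py rather than the harness's own helpers) looks like:

    # load the kernel driver and start the SPDK target with ublk tracing
    modprobe ublk_drv
    ./build/bin/spdk_tgt -L ublk &
    tgtpid=$!
    # poll the default RPC socket (/var/tmp/spdk.sock) until the app is ready
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.1; done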
00:14:52.442 [2024-11-04 10:14:57.928154] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70799 ] 00:14:52.442 [2024-11-04 10:14:58.087668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.442 [2024-11-04 10:14:58.184850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:53.375 [2024-11-04 10:14:58.790810] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:53.375 [2024-11-04 10:14:58.791605] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:53.375 malloc0 00:14:53.375 [2024-11-04 10:14:58.847255] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:53.375 [2024-11-04 10:14:58.847326] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:53.375 [2024-11-04 10:14:58.847336] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:53.375 [2024-11-04 10:14:58.847346] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:53.375 [2024-11-04 10:14:58.855869] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:53.375 [2024-11-04 10:14:58.855889] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:53.375 [2024-11-04 10:14:58.862813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:53.375 [2024-11-04 10:14:58.862903] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:53.375 [2024-11-04 10:14:58.879814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:53.375 0 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.375 10:14:58 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:53.633 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.633 10:14:59 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:53.633 "subsystems": [ 00:14:53.633 { 00:14:53.633 "subsystem": "fsdev", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "fsdev_set_opts", 00:14:53.633 "params": { 00:14:53.633 "fsdev_io_pool_size": 65535, 00:14:53.633 "fsdev_io_cache_size": 256 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "keyring", 00:14:53.633 "config": [] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "iobuf", 00:14:53.633 "config": [ 00:14:53.633 { 
00:14:53.633 "method": "iobuf_set_options", 00:14:53.633 "params": { 00:14:53.633 "small_pool_count": 8192, 00:14:53.633 "large_pool_count": 1024, 00:14:53.633 "small_bufsize": 8192, 00:14:53.633 "large_bufsize": 135168, 00:14:53.633 "enable_numa": false 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "sock", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "sock_set_default_impl", 00:14:53.633 "params": { 00:14:53.633 "impl_name": "posix" 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "sock_impl_set_options", 00:14:53.633 "params": { 00:14:53.633 "impl_name": "ssl", 00:14:53.633 "recv_buf_size": 4096, 00:14:53.633 "send_buf_size": 4096, 00:14:53.633 "enable_recv_pipe": true, 00:14:53.633 "enable_quickack": false, 00:14:53.633 "enable_placement_id": 0, 00:14:53.633 "enable_zerocopy_send_server": true, 00:14:53.633 "enable_zerocopy_send_client": false, 00:14:53.633 "zerocopy_threshold": 0, 00:14:53.633 "tls_version": 0, 00:14:53.633 "enable_ktls": false 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "sock_impl_set_options", 00:14:53.633 "params": { 00:14:53.633 "impl_name": "posix", 00:14:53.633 "recv_buf_size": 2097152, 00:14:53.633 "send_buf_size": 2097152, 00:14:53.633 "enable_recv_pipe": true, 00:14:53.633 "enable_quickack": false, 00:14:53.633 "enable_placement_id": 0, 00:14:53.633 "enable_zerocopy_send_server": true, 00:14:53.633 "enable_zerocopy_send_client": false, 00:14:53.633 "zerocopy_threshold": 0, 00:14:53.633 "tls_version": 0, 00:14:53.633 "enable_ktls": false 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "vmd", 00:14:53.633 "config": [] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "accel", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "accel_set_options", 00:14:53.633 "params": { 00:14:53.633 "small_cache_size": 128, 00:14:53.633 "large_cache_size": 16, 00:14:53.633 "task_count": 2048, 00:14:53.633 "sequence_count": 2048, 00:14:53.633 "buf_count": 2048 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "bdev", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "bdev_set_options", 00:14:53.633 "params": { 00:14:53.633 "bdev_io_pool_size": 65535, 00:14:53.633 "bdev_io_cache_size": 256, 00:14:53.633 "bdev_auto_examine": true, 00:14:53.633 "iobuf_small_cache_size": 128, 00:14:53.633 "iobuf_large_cache_size": 16 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_raid_set_options", 00:14:53.633 "params": { 00:14:53.633 "process_window_size_kb": 1024, 00:14:53.633 "process_max_bandwidth_mb_sec": 0 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_iscsi_set_options", 00:14:53.633 "params": { 00:14:53.633 "timeout_sec": 30 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_nvme_set_options", 00:14:53.633 "params": { 00:14:53.633 "action_on_timeout": "none", 00:14:53.633 "timeout_us": 0, 00:14:53.633 "timeout_admin_us": 0, 00:14:53.633 "keep_alive_timeout_ms": 10000, 00:14:53.633 "arbitration_burst": 0, 00:14:53.633 "low_priority_weight": 0, 00:14:53.633 "medium_priority_weight": 0, 00:14:53.633 "high_priority_weight": 0, 00:14:53.633 "nvme_adminq_poll_period_us": 10000, 00:14:53.633 "nvme_ioq_poll_period_us": 0, 00:14:53.633 "io_queue_requests": 0, 00:14:53.633 "delay_cmd_submit": true, 00:14:53.633 "transport_retry_count": 4, 00:14:53.633 
"bdev_retry_count": 3, 00:14:53.633 "transport_ack_timeout": 0, 00:14:53.633 "ctrlr_loss_timeout_sec": 0, 00:14:53.633 "reconnect_delay_sec": 0, 00:14:53.633 "fast_io_fail_timeout_sec": 0, 00:14:53.633 "disable_auto_failback": false, 00:14:53.633 "generate_uuids": false, 00:14:53.633 "transport_tos": 0, 00:14:53.633 "nvme_error_stat": false, 00:14:53.633 "rdma_srq_size": 0, 00:14:53.633 "io_path_stat": false, 00:14:53.633 "allow_accel_sequence": false, 00:14:53.633 "rdma_max_cq_size": 0, 00:14:53.633 "rdma_cm_event_timeout_ms": 0, 00:14:53.633 "dhchap_digests": [ 00:14:53.633 "sha256", 00:14:53.633 "sha384", 00:14:53.633 "sha512" 00:14:53.633 ], 00:14:53.633 "dhchap_dhgroups": [ 00:14:53.633 "null", 00:14:53.633 "ffdhe2048", 00:14:53.633 "ffdhe3072", 00:14:53.633 "ffdhe4096", 00:14:53.633 "ffdhe6144", 00:14:53.633 "ffdhe8192" 00:14:53.633 ] 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_nvme_set_hotplug", 00:14:53.633 "params": { 00:14:53.633 "period_us": 100000, 00:14:53.633 "enable": false 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_malloc_create", 00:14:53.633 "params": { 00:14:53.633 "name": "malloc0", 00:14:53.633 "num_blocks": 8192, 00:14:53.633 "block_size": 4096, 00:14:53.633 "physical_block_size": 4096, 00:14:53.633 "uuid": "74e135a7-19f9-44b4-a4d8-ecf84cb23a54", 00:14:53.633 "optimal_io_boundary": 0, 00:14:53.633 "md_size": 0, 00:14:53.633 "dif_type": 0, 00:14:53.633 "dif_is_head_of_md": false, 00:14:53.633 "dif_pi_format": 0 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "bdev_wait_for_examine" 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "scsi", 00:14:53.633 "config": null 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "scheduler", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "framework_set_scheduler", 00:14:53.633 "params": { 00:14:53.633 "name": "static" 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "vhost_scsi", 00:14:53.633 "config": [] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "vhost_blk", 00:14:53.633 "config": [] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "ublk", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "ublk_create_target", 00:14:53.633 "params": { 00:14:53.633 "cpumask": "1" 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "ublk_start_disk", 00:14:53.633 "params": { 00:14:53.633 "bdev_name": "malloc0", 00:14:53.633 "ublk_id": 0, 00:14:53.633 "num_queues": 1, 00:14:53.633 "queue_depth": 128 00:14:53.633 } 00:14:53.633 } 00:14:53.633 ] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "nbd", 00:14:53.633 "config": [] 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "subsystem": "nvmf", 00:14:53.633 "config": [ 00:14:53.633 { 00:14:53.633 "method": "nvmf_set_config", 00:14:53.633 "params": { 00:14:53.633 "discovery_filter": "match_any", 00:14:53.633 "admin_cmd_passthru": { 00:14:53.633 "identify_ctrlr": false 00:14:53.633 }, 00:14:53.633 "dhchap_digests": [ 00:14:53.633 "sha256", 00:14:53.633 "sha384", 00:14:53.633 "sha512" 00:14:53.633 ], 00:14:53.633 "dhchap_dhgroups": [ 00:14:53.633 "null", 00:14:53.633 "ffdhe2048", 00:14:53.633 "ffdhe3072", 00:14:53.633 "ffdhe4096", 00:14:53.633 "ffdhe6144", 00:14:53.633 "ffdhe8192" 00:14:53.633 ] 00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "nvmf_set_max_subsystems", 00:14:53.633 "params": { 00:14:53.633 "max_subsystems": 1024 
00:14:53.633 } 00:14:53.633 }, 00:14:53.633 { 00:14:53.633 "method": "nvmf_set_crdt", 00:14:53.633 "params": { 00:14:53.633 "crdt1": 0, 00:14:53.633 "crdt2": 0, 00:14:53.633 "crdt3": 0 00:14:53.634 } 00:14:53.634 } 00:14:53.634 ] 00:14:53.634 }, 00:14:53.634 { 00:14:53.634 "subsystem": "iscsi", 00:14:53.634 "config": [ 00:14:53.634 { 00:14:53.634 "method": "iscsi_set_options", 00:14:53.634 "params": { 00:14:53.634 "node_base": "iqn.2016-06.io.spdk", 00:14:53.634 "max_sessions": 128, 00:14:53.634 "max_connections_per_session": 2, 00:14:53.634 "max_queue_depth": 64, 00:14:53.634 "default_time2wait": 2, 00:14:53.634 "default_time2retain": 20, 00:14:53.634 "first_burst_length": 8192, 00:14:53.634 "immediate_data": true, 00:14:53.634 "allow_duplicated_isid": false, 00:14:53.634 "error_recovery_level": 0, 00:14:53.634 "nop_timeout": 60, 00:14:53.634 "nop_in_interval": 30, 00:14:53.634 "disable_chap": false, 00:14:53.634 "require_chap": false, 00:14:53.634 "mutual_chap": false, 00:14:53.634 "chap_group": 0, 00:14:53.634 "max_large_datain_per_connection": 64, 00:14:53.634 "max_r2t_per_connection": 4, 00:14:53.634 "pdu_pool_size": 36864, 00:14:53.634 "immediate_data_pool_size": 16384, 00:14:53.634 "data_out_pool_size": 2048 00:14:53.634 } 00:14:53.634 } 00:14:53.634 ] 00:14:53.634 } 00:14:53.634 ] 00:14:53.634 }' 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70799 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70799 ']' 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70799 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70799 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.634 killing process with pid 70799 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70799' 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70799 00:14:53.634 10:14:59 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70799 00:14:54.571 [2024-11-04 10:15:00.232064] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:54.571 [2024-11-04 10:15:00.267827] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:54.571 [2024-11-04 10:15:00.267965] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:54.571 [2024-11-04 10:15:00.277804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:54.571 [2024-11-04 10:15:00.277877] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:54.571 [2024-11-04 10:15:00.277890] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:54.571 [2024-11-04 10:15:00.277914] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:54.571 [2024-11-04 10:15:00.278051] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70848 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- 
ublk/ublk.sh@121 -- # waitforlisten 70848 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70848 ']' 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:55.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:55.942 10:15:01 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:55.942 "subsystems": [ 00:14:55.942 { 00:14:55.942 "subsystem": "fsdev", 00:14:55.942 "config": [ 00:14:55.942 { 00:14:55.942 "method": "fsdev_set_opts", 00:14:55.942 "params": { 00:14:55.942 "fsdev_io_pool_size": 65535, 00:14:55.942 "fsdev_io_cache_size": 256 00:14:55.942 } 00:14:55.942 } 00:14:55.942 ] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "keyring", 00:14:55.942 "config": [] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "iobuf", 00:14:55.942 "config": [ 00:14:55.942 { 00:14:55.942 "method": "iobuf_set_options", 00:14:55.942 "params": { 00:14:55.942 "small_pool_count": 8192, 00:14:55.942 "large_pool_count": 1024, 00:14:55.942 "small_bufsize": 8192, 00:14:55.942 "large_bufsize": 135168, 00:14:55.942 "enable_numa": false 00:14:55.942 } 00:14:55.942 } 00:14:55.942 ] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "sock", 00:14:55.942 "config": [ 00:14:55.942 { 00:14:55.942 "method": "sock_set_default_impl", 00:14:55.942 "params": { 00:14:55.942 "impl_name": "posix" 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "method": "sock_impl_set_options", 00:14:55.942 "params": { 00:14:55.942 "impl_name": "ssl", 00:14:55.942 "recv_buf_size": 4096, 00:14:55.942 "send_buf_size": 4096, 00:14:55.942 "enable_recv_pipe": true, 00:14:55.942 "enable_quickack": false, 00:14:55.942 "enable_placement_id": 0, 00:14:55.942 "enable_zerocopy_send_server": true, 00:14:55.942 "enable_zerocopy_send_client": false, 00:14:55.942 "zerocopy_threshold": 0, 00:14:55.942 "tls_version": 0, 00:14:55.942 "enable_ktls": false 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "method": "sock_impl_set_options", 00:14:55.942 "params": { 00:14:55.942 "impl_name": "posix", 00:14:55.942 "recv_buf_size": 2097152, 00:14:55.942 "send_buf_size": 2097152, 00:14:55.942 "enable_recv_pipe": true, 00:14:55.942 "enable_quickack": false, 00:14:55.942 "enable_placement_id": 0, 00:14:55.942 "enable_zerocopy_send_server": true, 00:14:55.942 "enable_zerocopy_send_client": false, 00:14:55.942 "zerocopy_threshold": 0, 00:14:55.942 "tls_version": 0, 00:14:55.942 "enable_ktls": false 00:14:55.942 } 00:14:55.942 } 00:14:55.942 ] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "vmd", 00:14:55.942 "config": [] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "accel", 00:14:55.942 "config": [ 00:14:55.942 { 00:14:55.942 "method": "accel_set_options", 00:14:55.942 "params": { 00:14:55.942 "small_cache_size": 128, 00:14:55.942 "large_cache_size": 16, 00:14:55.942 "task_count": 2048, 00:14:55.942 
"sequence_count": 2048, 00:14:55.942 "buf_count": 2048 00:14:55.942 } 00:14:55.942 } 00:14:55.942 ] 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "subsystem": "bdev", 00:14:55.942 "config": [ 00:14:55.942 { 00:14:55.942 "method": "bdev_set_options", 00:14:55.942 "params": { 00:14:55.942 "bdev_io_pool_size": 65535, 00:14:55.942 "bdev_io_cache_size": 256, 00:14:55.942 "bdev_auto_examine": true, 00:14:55.942 "iobuf_small_cache_size": 128, 00:14:55.942 "iobuf_large_cache_size": 16 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "method": "bdev_raid_set_options", 00:14:55.942 "params": { 00:14:55.942 "process_window_size_kb": 1024, 00:14:55.942 "process_max_bandwidth_mb_sec": 0 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "method": "bdev_iscsi_set_options", 00:14:55.942 "params": { 00:14:55.942 "timeout_sec": 30 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.942 "method": "bdev_nvme_set_options", 00:14:55.942 "params": { 00:14:55.942 "action_on_timeout": "none", 00:14:55.942 "timeout_us": 0, 00:14:55.942 "timeout_admin_us": 0, 00:14:55.942 "keep_alive_timeout_ms": 10000, 00:14:55.942 "arbitration_burst": 0, 00:14:55.942 "low_priority_weight": 0, 00:14:55.942 "medium_priority_weight": 0, 00:14:55.942 "high_priority_weight": 0, 00:14:55.942 "nvme_adminq_poll_period_us": 10000, 00:14:55.942 "nvme_ioq_poll_period_us": 0, 00:14:55.942 "io_queue_requests": 0, 00:14:55.942 "delay_cmd_submit": true, 00:14:55.942 "transport_retry_count": 4, 00:14:55.942 "bdev_retry_count": 3, 00:14:55.942 "transport_ack_timeout": 0, 00:14:55.942 "ctrlr_loss_timeout_sec": 0, 00:14:55.942 "reconnect_delay_sec": 0, 00:14:55.942 "fast_io_fail_timeout_sec": 0, 00:14:55.942 "disable_auto_failback": false, 00:14:55.942 "generate_uuids": false, 00:14:55.942 "transport_tos": 0, 00:14:55.942 "nvme_error_stat": false, 00:14:55.942 "rdma_srq_size": 0, 00:14:55.942 "io_path_stat": false, 00:14:55.942 "allow_accel_sequence": false, 00:14:55.942 "rdma_max_cq_size": 0, 00:14:55.942 "rdma_cm_event_timeout_ms": 0, 00:14:55.942 "dhchap_digests": [ 00:14:55.942 "sha256", 00:14:55.942 "sha384", 00:14:55.942 "sha512" 00:14:55.942 ], 00:14:55.942 "dhchap_dhgroups": [ 00:14:55.942 "null", 00:14:55.942 "ffdhe2048", 00:14:55.942 "ffdhe3072", 00:14:55.942 "ffdhe4096", 00:14:55.942 "ffdhe6144", 00:14:55.942 "ffdhe8192" 00:14:55.942 ] 00:14:55.942 } 00:14:55.942 }, 00:14:55.942 { 00:14:55.943 "method": "bdev_nvme_set_hotplug", 00:14:55.943 "params": { 00:14:55.943 "period_us": 100000, 00:14:55.943 "enable": false 00:14:55.943 } 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "method": "bdev_malloc_create", 00:14:55.943 "params": { 00:14:55.943 "name": "malloc0", 00:14:55.943 "num_blocks": 8192, 00:14:55.943 "block_size": 4096, 00:14:55.943 "physical_block_size": 4096, 00:14:55.943 "uuid": "74e135a7-19f9-44b4-a4d8-ecf84cb23a54", 00:14:55.943 "optimal_io_boundary": 0, 00:14:55.943 "md_size": 0, 00:14:55.943 "dif_type": 0, 00:14:55.943 "dif_is_head_of_md": false, 00:14:55.943 "dif_pi_format": 0 00:14:55.943 } 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "method": "bdev_wait_for_examine" 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "scsi", 00:14:55.943 "config": null 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "scheduler", 00:14:55.943 "config": [ 00:14:55.943 { 00:14:55.943 "method": "framework_set_scheduler", 00:14:55.943 "params": { 00:14:55.943 "name": "static" 00:14:55.943 } 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": 
"vhost_scsi", 00:14:55.943 "config": [] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "vhost_blk", 00:14:55.943 "config": [] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "ublk", 00:14:55.943 "config": [ 00:14:55.943 { 00:14:55.943 "method": "ublk_create_target", 00:14:55.943 "params": { 00:14:55.943 "cpumask": "1" 00:14:55.943 } 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "method": "ublk_start_disk", 00:14:55.943 "params": { 00:14:55.943 "bdev_name": "malloc0", 00:14:55.943 "ublk_id": 0, 00:14:55.943 "num_queues": 1, 00:14:55.943 "queue_depth": 128 00:14:55.943 } 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "nbd", 00:14:55.943 "config": [] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "nvmf", 00:14:55.943 "config": [ 00:14:55.943 { 00:14:55.943 "method": "nvmf_set_config", 00:14:55.943 "params": { 00:14:55.943 "discovery_filter": "match_any", 00:14:55.943 "admin_cmd_passthru": { 00:14:55.943 "identify_ctrlr": false 00:14:55.943 }, 00:14:55.943 "dhchap_digests": [ 00:14:55.943 "sha256", 00:14:55.943 "sha384", 00:14:55.943 "sha512" 00:14:55.943 ], 00:14:55.943 "dhchap_dhgroups": [ 00:14:55.943 "null", 00:14:55.943 "ffdhe2048", 00:14:55.943 "ffdhe3072", 00:14:55.943 "ffdhe4096", 00:14:55.943 "ffdhe6144", 00:14:55.943 "ffdhe8192" 00:14:55.943 ] 00:14:55.943 } 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "method": "nvmf_set_max_subsystems", 00:14:55.943 "params": { 00:14:55.943 "max_subsystems": 1024 00:14:55.943 } 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "method": "nvmf_set_crdt", 00:14:55.943 "params": { 00:14:55.943 "crdt1": 0, 00:14:55.943 "crdt2": 0, 00:14:55.943 "crdt3": 0 00:14:55.943 } 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 }, 00:14:55.943 { 00:14:55.943 "subsystem": "iscsi", 00:14:55.943 "config": [ 00:14:55.943 { 00:14:55.943 "method": "iscsi_set_options", 00:14:55.943 "params": { 00:14:55.943 "node_base": "iqn.2016-06.io.spdk", 00:14:55.943 "max_sessions": 128, 00:14:55.943 "max_connections_per_session": 2, 00:14:55.943 "max_queue_depth": 64, 00:14:55.943 "default_time2wait": 2, 00:14:55.943 "default_time2retain": 20, 00:14:55.943 "first_burst_length": 8192, 00:14:55.943 "immediate_data": true, 00:14:55.943 "allow_duplicated_isid": false, 00:14:55.943 "error_recovery_level": 0, 00:14:55.943 "nop_timeout": 60, 00:14:55.943 "nop_in_interval": 30, 00:14:55.943 "disable_chap": false, 00:14:55.943 "require_chap": false, 00:14:55.943 "mutual_chap": false, 00:14:55.943 "chap_group": 0, 00:14:55.943 "max_large_datain_per_connection": 64, 00:14:55.943 "max_r2t_per_connection": 4, 00:14:55.943 "pdu_pool_size": 36864, 00:14:55.943 "immediate_data_pool_size": 16384, 00:14:55.943 "data_out_pool_size": 2048 00:14:55.943 } 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 } 00:14:55.943 ] 00:14:55.943 }' 00:14:55.943 10:15:01 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:55.943 [2024-11-04 10:15:01.572437] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:14:55.943 [2024-11-04 10:15:01.572553] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70848 ] 00:14:56.201 [2024-11-04 10:15:01.726465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.201 [2024-11-04 10:15:01.806986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.767 [2024-11-04 10:15:02.440795] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:56.767 [2024-11-04 10:15:02.441419] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.767 [2024-11-04 10:15:02.448882] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:56.767 [2024-11-04 10:15:02.448940] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:56.767 [2024-11-04 10:15:02.448948] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:56.767 [2024-11-04 10:15:02.448953] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:56.767 [2024-11-04 10:15:02.457862] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:56.767 [2024-11-04 10:15:02.457879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:56.767 [2024-11-04 10:15:02.464800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:56.767 [2024-11-04 10:15:02.464869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:56.767 [2024-11-04 10:15:02.481800] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70848 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70848 ']' 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70848 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70848 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:57.026 killing process with pid 70848 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70848' 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70848 00:14:57.026 10:15:02 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70848 00:14:57.960 [2024-11-04 10:15:03.665450] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:57.960 [2024-11-04 10:15:03.701815] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:57.960 [2024-11-04 10:15:03.701914] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:58.218 [2024-11-04 10:15:03.709807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:58.218 [2024-11-04 10:15:03.709850] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:58.218 [2024-11-04 10:15:03.709856] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:58.218 [2024-11-04 10:15:03.709876] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:58.218 [2024-11-04 10:15:03.709985] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:59.151 10:15:04 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:59.151 00:14:59.151 real 0m7.021s 00:14:59.151 user 0m4.798s 00:14:59.151 sys 0m2.821s 00:14:59.151 10:15:04 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:59.151 10:15:04 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:59.151 ************************************ 00:14:59.151 END TEST test_save_ublk_config 00:14:59.152 ************************************ 00:14:59.426 10:15:04 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70921 00:14:59.426 10:15:04 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:59.426 10:15:04 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:59.426 10:15:04 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70921 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@833 -- # '[' -z 70921 ']' 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:59.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:59.426 10:15:04 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:59.426 [2024-11-04 10:15:04.976311] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
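For the create tests the target is restarted with -m 0x3, a hexadecimal core mask with bits 0 and 1 set, so the EAL banner that follows reports two available cores and reactors come up on cores 0 and 1. A small illustrative sketch (not part of the harness) of how such a mask maps to a reactor count:

    # count the set bits in an SPDK core mask: 0x3 -> 2 reactors
    mask=0x3 bits=0
    for (( i = 0; i < 64; i++ )); do (( (mask >> i) & 1 )) && (( bits++ )); done
    echo "reactors: $bits"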
00:14:59.426 [2024-11-04 10:15:04.976430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70921 ] 00:14:59.426 [2024-11-04 10:15:05.130887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:59.694 [2024-11-04 10:15:05.211069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:59.694 [2024-11-04 10:15:05.211254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:00.259 10:15:05 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:00.259 10:15:05 ublk -- common/autotest_common.sh@866 -- # return 0 00:15:00.259 10:15:05 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:00.259 10:15:05 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:00.259 10:15:05 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:00.259 10:15:05 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.259 ************************************ 00:15:00.259 START TEST test_create_ublk 00:15:00.259 ************************************ 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:15:00.259 10:15:05 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.259 [2024-11-04 10:15:05.828803] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:00.259 [2024-11-04 10:15:05.830359] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.259 10:15:05 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:00.259 10:15:05 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.259 10:15:05 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:00.259 10:15:05 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.259 10:15:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.259 [2024-11-04 10:15:05.979907] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:00.259 [2024-11-04 10:15:05.980202] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:00.259 [2024-11-04 10:15:05.980210] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:00.259 [2024-11-04 10:15:05.980216] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:00.259 [2024-11-04 10:15:05.987823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:00.259 [2024-11-04 10:15:05.987841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:00.259 
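The bring-up sequence being traced here is the standard one: ublk_create_target, a 128 MiB malloc bdev with 4 KiB blocks, then ublk_start_disk, which drives the kernel through UBLK_CMD_ADD_DEV, UBLK_CMD_SET_PARAMS and UBLK_CMD_START_DEV until /dev/ublkb0 appears. Issued by hand, the same three RPCs (a sketch assuming the bundled scripts/rpc.py against the default socket) are:

    # target, backing bdev, then the ublk device with 4 queues of depth 512
    ./scripts/rpc.py ublk_create_target
    ./scripts/rpc.py bdev_malloc_create 128 4096
    ./scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512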
[2024-11-04 10:15:05.995818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:00.517 [2024-11-04 10:15:06.003841] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:00.517 [2024-11-04 10:15:06.025815] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:00.517 10:15:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:00.517 10:15:06 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.517 10:15:06 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:00.517 10:15:06 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:00.517 { 00:15:00.517 "ublk_device": "/dev/ublkb0", 00:15:00.517 "id": 0, 00:15:00.517 "queue_depth": 512, 00:15:00.517 "num_queues": 4, 00:15:00.517 "bdev_name": "Malloc0" 00:15:00.517 } 00:15:00.517 ]' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
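run_fio_test then executes the template assembled above: a 10-second, time-based, direct-I/O sequential write of pattern 0xcc over the full 128 MiB device, with pattern verification requested and verify-state saving disabled. Written out as a standalone command, the invocation from the trace is:

    fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
        --rw=write --direct=1 --time_based --runtime=10 \
        --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0

Since the timed write consumes the entire runtime, fio warns below that the verification read phase never starts; the test's pass criterion is the clean write run (err= 0) and the device statistics that follow.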
00:15:00.517 10:15:06 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:00.774 fio: verification read phase will never start because write phase uses all of runtime 00:15:00.774 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:00.774 fio-3.35 00:15:00.774 Starting 1 process 00:15:10.736 00:15:10.736 fio_test: (groupid=0, jobs=1): err= 0: pid=70964: Mon Nov 4 10:15:16 2024 00:15:10.736 write: IOPS=18.7k, BW=72.9MiB/s (76.5MB/s)(729MiB/10001msec); 0 zone resets 00:15:10.736 clat (usec): min=37, max=7974, avg=52.77, stdev=113.13 00:15:10.736 lat (usec): min=37, max=7987, avg=53.22, stdev=113.15 00:15:10.736 clat percentiles (usec): 00:15:10.736 | 1.00th=[ 40], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 44], 00:15:10.736 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 48], 60.00th=[ 49], 00:15:10.736 | 70.00th=[ 50], 80.00th=[ 52], 90.00th=[ 55], 95.00th=[ 60], 00:15:10.736 | 99.00th=[ 70], 99.50th=[ 76], 99.90th=[ 2409], 99.95th=[ 3294], 00:15:10.736 | 99.99th=[ 3785] 00:15:10.736 bw ( KiB/s): min=33504, max=81168, per=100.00%, avg=74827.79, stdev=10470.08, samples=19 00:15:10.736 iops : min= 8376, max=20292, avg=18706.95, stdev=2617.52, samples=19 00:15:10.736 lat (usec) : 50=69.96%, 100=29.72%, 250=0.11%, 500=0.03%, 750=0.01% 00:15:10.736 lat (usec) : 1000=0.01% 00:15:10.736 lat (msec) : 2=0.04%, 4=0.11%, 10=0.01% 00:15:10.736 cpu : usr=3.24%, sys=14.27%, ctx=186702, majf=0, minf=797 00:15:10.736 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:10.736 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.736 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:10.736 issued rwts: total=0,186703,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:10.736 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:10.736 00:15:10.736 Run status group 0 (all jobs): 00:15:10.736 WRITE: bw=72.9MiB/s (76.5MB/s), 72.9MiB/s-72.9MiB/s (76.5MB/s-76.5MB/s), io=729MiB (765MB), run=10001-10001msec 00:15:10.736 00:15:10.736 Disk stats (read/write): 00:15:10.736 ublkb0: ios=0/184762, merge=0/0, ticks=0/8276, in_queue=8277, util=99.09% 00:15:10.736 10:15:16 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:10.736 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.736 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.736 [2024-11-04 10:15:16.443560] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:10.736 [2024-11-04 10:15:16.473256] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:10.736 [2024-11-04 10:15:16.474140] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:10.994 [2024-11-04 10:15:16.480807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:10.994 [2024-11-04 10:15:16.481041] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:10.994 [2024-11-04 10:15:16.481054] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.994 10:15:16 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.994 [2024-11-04 10:15:16.495885] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:10.994 request: 00:15:10.994 { 00:15:10.994 "ublk_id": 0, 00:15:10.994 "method": "ublk_stop_disk", 00:15:10.994 "req_id": 1 00:15:10.994 } 00:15:10.994 Got JSON-RPC error response 00:15:10.994 response: 00:15:10.994 { 00:15:10.994 "code": -19, 00:15:10.994 "message": "No such device" 00:15:10.994 } 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.994 10:15:16 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:10.994 [2024-11-04 10:15:16.512860] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:10.994 [2024-11-04 10:15:16.516460] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:10.994 [2024-11-04 10:15:16.516493] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.994 10:15:16 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.994 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.252 10:15:16 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:11.252 10:15:16 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:11.252 ************************************ 00:15:11.252 END TEST test_create_ublk 00:15:11.252 ************************************ 00:15:11.252 10:15:16 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:11.252 00:15:11.252 real 0m11.156s 00:15:11.252 user 0m0.623s 00:15:11.252 sys 0m1.510s 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:11.252 10:15:16 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.510 10:15:17 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:11.510 10:15:17 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:11.510 10:15:17 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:11.510 10:15:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.510 ************************************ 00:15:11.510 START TEST test_create_multi_ublk 00:15:11.510 ************************************ 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.510 [2024-11-04 10:15:17.019795] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:11.510 [2024-11-04 10:15:17.021314] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.510 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.510 [2024-11-04 10:15:17.223916] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:15:11.510 [2024-11-04 10:15:17.224209] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:11.510 [2024-11-04 10:15:17.224221] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:11.510 [2024-11-04 10:15:17.224229] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:11.510 [2024-11-04 10:15:17.247803] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:11.510 [2024-11-04 10:15:17.247854] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:11.767 [2024-11-04 10:15:17.259797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:11.767 [2024-11-04 10:15:17.260305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:11.767 [2024-11-04 10:15:17.288807] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:11.767 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.767 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:11.767 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.768 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:11.768 [2024-11-04 10:15:17.509892] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:11.768 [2024-11-04 10:15:17.510181] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:11.768 [2024-11-04 10:15:17.510196] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:11.768 [2024-11-04 10:15:17.510200] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:12.025 [2024-11-04 10:15:17.517811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:12.025 [2024-11-04 10:15:17.517827] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:12.025 [2024-11-04 10:15:17.525805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:12.025 [2024-11-04 10:15:17.526289] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:12.025 [2024-11-04 10:15:17.530512] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.025 
10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.025 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:12.026 [2024-11-04 10:15:17.689881] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:12.026 [2024-11-04 10:15:17.690175] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:12.026 [2024-11-04 10:15:17.690186] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:12.026 [2024-11-04 10:15:17.690193] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:12.026 [2024-11-04 10:15:17.698959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:12.026 [2024-11-04 10:15:17.698978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:12.026 [2024-11-04 10:15:17.705802] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:12.026 [2024-11-04 10:15:17.706301] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:12.026 [2024-11-04 10:15:17.714829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.026 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 [2024-11-04 10:15:17.873896] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:12.283 [2024-11-04 10:15:17.874184] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:12.283 [2024-11-04 10:15:17.874198] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:12.283 [2024-11-04 10:15:17.874203] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:12.283 
[2024-11-04 10:15:17.881814] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:12.283 [2024-11-04 10:15:17.881832] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:12.283 [2024-11-04 10:15:17.889804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:12.283 [2024-11-04 10:15:17.890288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:12.283 [2024-11-04 10:15:17.904838] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.283 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:12.283 { 00:15:12.283 "ublk_device": "/dev/ublkb0", 00:15:12.283 "id": 0, 00:15:12.283 "queue_depth": 512, 00:15:12.284 "num_queues": 4, 00:15:12.284 "bdev_name": "Malloc0" 00:15:12.284 }, 00:15:12.284 { 00:15:12.284 "ublk_device": "/dev/ublkb1", 00:15:12.284 "id": 1, 00:15:12.284 "queue_depth": 512, 00:15:12.284 "num_queues": 4, 00:15:12.284 "bdev_name": "Malloc1" 00:15:12.284 }, 00:15:12.284 { 00:15:12.284 "ublk_device": "/dev/ublkb2", 00:15:12.284 "id": 2, 00:15:12.284 "queue_depth": 512, 00:15:12.284 "num_queues": 4, 00:15:12.284 "bdev_name": "Malloc2" 00:15:12.284 }, 00:15:12.284 { 00:15:12.284 "ublk_device": "/dev/ublkb3", 00:15:12.284 "id": 3, 00:15:12.284 "queue_depth": 512, 00:15:12.284 "num_queues": 4, 00:15:12.284 "bdev_name": "Malloc3" 00:15:12.284 } 00:15:12.284 ]' 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:12.284 10:15:17 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:12.284 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:12.284 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:12.542 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:12.801 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:13.059 [2024-11-04 10:15:18.560887] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.059 [2024-11-04 10:15:18.593278] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.059 [2024-11-04 10:15:18.594293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.059 [2024-11-04 10:15:18.603813] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.059 [2024-11-04 10:15:18.604049] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:13.059 [2024-11-04 10:15:18.604074] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:13.059 [2024-11-04 10:15:18.616878] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.059 [2024-11-04 10:15:18.651805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.059 [2024-11-04 10:15:18.652480] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.059 [2024-11-04 10:15:18.659826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.059 [2024-11-04 10:15:18.660047] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:13.059 [2024-11-04 10:15:18.660059] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:13.059 [2024-11-04 10:15:18.675869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.059 [2024-11-04 10:15:18.709797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.059 [2024-11-04 10:15:18.710444] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.059 [2024-11-04 10:15:18.715806] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.059 [2024-11-04 10:15:18.716041] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:13.059 [2024-11-04 10:15:18.716050] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:15:13.059 [2024-11-04 10:15:18.726868] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:13.059 [2024-11-04 10:15:18.755251] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:13.059 [2024-11-04 10:15:18.756131] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:13.059 [2024-11-04 10:15:18.766804] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:13.059 [2024-11-04 10:15:18.767057] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:13.059 [2024-11-04 10:15:18.767070] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.059 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:13.317 [2024-11-04 10:15:18.958864] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:13.317 [2024-11-04 10:15:18.962517] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:13.317 [2024-11-04 10:15:18.962549] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:13.317 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:13.317 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.317 10:15:18 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:13.317 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.317 10:15:18 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:13.881 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.881 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:13.881 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:13.881 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.881 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.138 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:14.138 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:14.138 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.138 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.395 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.395 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:14.395 10:15:19 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:14.395 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.395 10:15:19 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:14.395 10:15:20 
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:14.395 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:14.652 ************************************ 00:15:14.652 END TEST test_create_multi_ublk 00:15:14.652 ************************************ 00:15:14.652 10:15:20 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:14.652 00:15:14.652 real 0m3.172s 00:15:14.652 user 0m0.841s 00:15:14.652 sys 0m0.134s 00:15:14.653 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:14.653 10:15:20 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:14.653 10:15:20 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:14.653 10:15:20 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:14.653 10:15:20 ublk -- ublk/ublk.sh@130 -- # killprocess 70921 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@952 -- # '[' -z 70921 ']' 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@956 -- # kill -0 70921 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@957 -- # uname 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70921 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:14.653 killing process with pid 70921 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70921' 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@971 -- # kill 70921 00:15:14.653 10:15:20 ublk -- common/autotest_common.sh@976 -- # wait 70921 00:15:15.217 [2024-11-04 10:15:20.777998] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:15.217 [2024-11-04 10:15:20.778046] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:15.781 ************************************ 00:15:15.782 END TEST ublk 00:15:15.782 ************************************ 00:15:15.782 00:15:15.782 real 0m23.705s 00:15:15.782 user 0m34.546s 00:15:15.782 sys 0m9.141s 00:15:15.782 10:15:21 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:15.782 10:15:21 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 10:15:21 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:15.782 
10:15:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:15.782 10:15:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:15.782 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:15:15.782 ************************************ 00:15:15.782 START TEST ublk_recovery 00:15:15.782 ************************************ 00:15:15.782 10:15:21 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:15.782 * Looking for test storage... 00:15:15.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:15.782 10:15:21 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:15.782 10:15:21 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:15.782 10:15:21 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:16.039 10:15:21 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.039 --rc genhtml_branch_coverage=1 00:15:16.039 --rc genhtml_function_coverage=1 00:15:16.039 --rc genhtml_legend=1 00:15:16.039 --rc geninfo_all_blocks=1 00:15:16.039 --rc geninfo_unexecuted_blocks=1 00:15:16.039 00:15:16.039 ' 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.039 --rc genhtml_branch_coverage=1 00:15:16.039 --rc genhtml_function_coverage=1 00:15:16.039 --rc genhtml_legend=1 00:15:16.039 --rc geninfo_all_blocks=1 00:15:16.039 --rc geninfo_unexecuted_blocks=1 00:15:16.039 00:15:16.039 ' 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.039 --rc genhtml_branch_coverage=1 00:15:16.039 --rc genhtml_function_coverage=1 00:15:16.039 --rc genhtml_legend=1 00:15:16.039 --rc geninfo_all_blocks=1 00:15:16.039 --rc geninfo_unexecuted_blocks=1 00:15:16.039 00:15:16.039 ' 00:15:16.039 10:15:21 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:16.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:16.039 --rc genhtml_branch_coverage=1 00:15:16.039 --rc genhtml_function_coverage=1 00:15:16.039 --rc genhtml_legend=1 00:15:16.039 --rc geninfo_all_blocks=1 00:15:16.039 --rc geninfo_unexecuted_blocks=1 00:15:16.039 00:15:16.039 ' 00:15:16.039 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:16.039 10:15:21 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:16.040 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:16.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.040 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71311 00:15:16.040 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:16.040 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71311 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71311 ']' 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:16.040 10:15:21 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:16.040 10:15:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.040 [2024-11-04 10:15:21.661735] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:15:16.040 [2024-11-04 10:15:21.661873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71311 ] 00:15:16.297 [2024-11-04 10:15:21.818518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:16.297 [2024-11-04 10:15:21.894264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.297 [2024-11-04 10:15:21.894335] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:15:16.861 10:15:22 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.861 [2024-11-04 10:15:22.495805] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:16.861 [2024-11-04 10:15:22.497375] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.861 10:15:22 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.861 malloc0 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.861 10:15:22 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.861 10:15:22 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:16.861 [2024-11-04 10:15:22.575976] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:16.861 [2024-11-04 10:15:22.576056] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:16.861 [2024-11-04 10:15:22.576065] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:16.861 [2024-11-04 10:15:22.576072] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:16.862 [2024-11-04 10:15:22.584868] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:16.862 [2024-11-04 10:15:22.584887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:16.862 [2024-11-04 10:15:22.591805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:16.862 [2024-11-04 10:15:22.591922] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:17.118 [2024-11-04 10:15:22.614805] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:17.118 1 00:15:17.118 10:15:22 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.118 10:15:22 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:18.051 10:15:23 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71340 00:15:18.051 10:15:23 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:18.051 10:15:23 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:18.051 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:18.051 fio-3.35 00:15:18.051 Starting 1 process 00:15:23.317 10:15:28 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71311 00:15:23.317 10:15:28 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:28.620 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71311 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:28.620 10:15:33 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71455 00:15:28.620 10:15:33 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:28.620 10:15:33 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71455 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71455 ']' 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.620 10:15:33 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:28.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.620 10:15:33 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.620 [2024-11-04 10:15:33.712106] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
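For reference, a minimal sketch of the ublk lifecycle and crash-recovery sequence this suite exercises, using the same RPC calls that appear in the trace above (rpc.py abbreviates /home/vagrant/spdk_repo/spdk/scripts/rpc.py); the bdev names, sizes, and device IDs mirror the log, but this is an illustrative summary, not a substitute for the exact recorded arguments:

    # Normal lifecycle (test_create_multi_ublk):
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b Malloc0 128 4096   # 128 MiB bdev, 4 KiB blocks
    rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512    # exposes /dev/ublkb0 to the kernel
    rpc.py ublk_get_disks                           # JSON listing, verified with jq above
    rpc.py ublk_stop_disk 0
    rpc.py ublk_destroy_target

    # Recovery path (ublk_recovery): after spdk_tgt is killed with SIGKILL while
    # fio runs against /dev/ublkb1, a freshly started target re-attaches the bdev:
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1   # drives UBLK_CMD_START/END_USER_RECOVERY, per the debug lines below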
00:15:28.620 [2024-11-04 10:15:33.712522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:15:28.620 [2024-11-04 10:15:33.866689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:28.620 [2024-11-04 10:15:33.948204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.620 [2024-11-04 10:15:33.948214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:15:28.879 10:15:34 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:28.879 [2024-11-04 10:15:34.541801] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:28.879 [2024-11-04 10:15:34.543318] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.879 10:15:34 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.879 10:15:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.139 malloc0 00:15:29.139 10:15:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.139 10:15:34 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:29.139 10:15:34 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.139 10:15:34 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:29.139 [2024-11-04 10:15:34.625482] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:29.139 [2024-11-04 10:15:34.625514] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:29.139 [2024-11-04 10:15:34.625521] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:29.139 [2024-11-04 10:15:34.629797] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:29.139 [2024-11-04 10:15:34.629811] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:29.139 1 00:15:29.139 10:15:34 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.139 10:15:34 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71340 00:15:30.070 [2024-11-04 10:15:35.629843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:30.070 [2024-11-04 10:15:35.631801] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:30.070 [2024-11-04 10:15:35.631809] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:31.004 [2024-11-04 10:15:36.631835] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:31.004 [2024-11-04 10:15:36.635799] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:31.004 [2024-11-04 10:15:36.635817] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:15:31.940 [2024-11-04 10:15:37.635843] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:31.940 [2024-11-04 10:15:37.636818] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:31.940 [2024-11-04 10:15:37.636829] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:15:31.940 [2024-11-04 10:15:37.636837] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:31.940 [2024-11-04 10:15:37.636909] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:53.923 [2024-11-04 10:15:58.931811] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:53.923 [2024-11-04 10:15:58.935156] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:53.923 [2024-11-04 10:15:58.938976] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:53.923 [2024-11-04 10:15:58.938993] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:20.503 00:16:20.503 fio_test: (groupid=0, jobs=1): err= 0: pid=71347: Mon Nov 4 10:16:23 2024 00:16:20.503 read: IOPS=14.8k, BW=57.7MiB/s (60.5MB/s)(3460MiB/60002msec) 00:16:20.503 slat (nsec): min=1049, max=130318, avg=4848.31, stdev=1377.57 00:16:20.503 clat (usec): min=650, max=30320k, avg=4534.31, stdev=271427.13 00:16:20.503 lat (usec): min=660, max=30320k, avg=4539.16, stdev=271427.13 00:16:20.503 clat percentiles (usec): 00:16:20.503 | 1.00th=[ 1647], 5.00th=[ 1762], 10.00th=[ 1795], 20.00th=[ 1827], 00:16:20.503 | 30.00th=[ 1844], 40.00th=[ 1860], 50.00th=[ 1860], 60.00th=[ 1876], 00:16:20.503 | 70.00th=[ 1893], 80.00th=[ 1926], 90.00th=[ 2212], 95.00th=[ 4293], 00:16:20.503 | 99.00th=[ 5932], 99.50th=[ 6521], 99.90th=[12125], 99.95th=[12649], 00:16:20.503 | 99.99th=[13042] 00:16:20.503 bw ( KiB/s): min=51688, max=130664, per=100.00%, avg=118053.83, stdev=23656.79, samples=59 00:16:20.503 iops : min=12922, max=32666, avg=29513.49, stdev=5914.21, samples=59 00:16:20.503 write: IOPS=14.7k, BW=57.6MiB/s (60.4MB/s)(3455MiB/60002msec); 0 zone resets 00:16:20.503 slat (nsec): min=1079, max=334940, avg=4875.34, stdev=1465.68 00:16:20.503 clat (usec): min=624, max=30320k, avg=4132.32, stdev=243385.51 00:16:20.503 lat (usec): min=629, max=30320k, avg=4137.20, stdev=243385.51 00:16:20.503 clat percentiles (usec): 00:16:20.503 | 1.00th=[ 1680], 5.00th=[ 1844], 10.00th=[ 1876], 20.00th=[ 1909], 00:16:20.503 | 30.00th=[ 1926], 40.00th=[ 1942], 50.00th=[ 1958], 60.00th=[ 1975], 00:16:20.503 | 70.00th=[ 1991], 80.00th=[ 2008], 90.00th=[ 2180], 95.00th=[ 4228], 00:16:20.503 | 99.00th=[ 5932], 99.50th=[ 6587], 99.90th=[12256], 99.95th=[12649], 00:16:20.503 | 99.99th=[13173] 00:16:20.503 bw ( KiB/s): min=52624, max=131344, per=100.00%, avg=117885.83, stdev=23500.12, samples=59 00:16:20.503 iops : min=13156, max=32836, avg=29471.46, stdev=5875.03, samples=59 00:16:20.503 lat (usec) : 750=0.01%, 1000=0.01% 00:16:20.503 lat (msec) : 2=81.99%, 4=12.65%, 10=5.26%, 20=0.10%, >=2000=0.01% 00:16:20.503 cpu : usr=3.35%, sys=14.86%, ctx=59974, majf=0, minf=13 00:16:20.503 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:20.503 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:20.503 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:16:20.503 issued rwts: total=885717,884376,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:20.503 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:20.503 00:16:20.503 Run status group 0 (all jobs): 00:16:20.503 READ: bw=57.7MiB/s (60.5MB/s), 57.7MiB/s-57.7MiB/s (60.5MB/s-60.5MB/s), io=3460MiB (3628MB), run=60002-60002msec 00:16:20.503 WRITE: bw=57.6MiB/s (60.4MB/s), 57.6MiB/s-57.6MiB/s (60.4MB/s-60.4MB/s), io=3455MiB (3622MB), run=60002-60002msec 00:16:20.503 00:16:20.503 Disk stats (read/write): 00:16:20.503 ublkb1: ios=882248/880944, merge=0/0, ticks=3962912/3531252, in_queue=7494165, util=99.89% 00:16:20.503 10:16:23 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 [2024-11-04 10:16:23.884625] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.503 [2024-11-04 10:16:23.918820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.503 [2024-11-04 10:16:23.918951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.503 [2024-11-04 10:16:23.923100] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.503 [2024-11-04 10:16:23.923199] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:20.503 [2024-11-04 10:16:23.926795] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.503 10:16:23 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 [2024-11-04 10:16:23.931194] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.503 [2024-11-04 10:16:23.938336] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:20.503 [2024-11-04 10:16:23.941803] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.503 10:16:23 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:20.503 10:16:23 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:20.503 10:16:23 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71455 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71455 ']' 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71455 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71455 00:16:20.503 killing process with pid 71455 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71455' 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71455 00:16:20.503 10:16:23 ublk_recovery -- common/autotest_common.sh@976 -- # 
wait 71455 00:16:20.503 [2024-11-04 10:16:25.006709] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.503 [2024-11-04 10:16:25.006752] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:20.503 ************************************ 00:16:20.503 END TEST ublk_recovery 00:16:20.503 ************************************ 00:16:20.503 00:16:20.503 real 1m4.246s 00:16:20.503 user 1m45.368s 00:16:20.503 sys 0m23.493s 00:16:20.503 10:16:25 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:20.503 10:16:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 10:16:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:20.503 10:16:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:20.503 10:16:25 -- common/autotest_common.sh@10 -- # set +x 00:16:20.503 10:16:25 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:20.503 10:16:25 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.504 10:16:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:20.504 10:16:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:20.504 10:16:25 -- common/autotest_common.sh@10 -- # set +x 00:16:20.504 ************************************ 00:16:20.504 START TEST ftl 00:16:20.504 ************************************ 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.504 * Looking for test storage... 
00:16:20.504 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:20.504 10:16:25 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:20.504 10:16:25 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:20.504 10:16:25 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:20.504 10:16:25 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:20.504 10:16:25 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:20.504 10:16:25 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:20.504 10:16:25 ftl -- scripts/common.sh@345 -- # : 1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:20.504 10:16:25 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:20.504 10:16:25 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@353 -- # local d=1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:20.504 10:16:25 ftl -- scripts/common.sh@355 -- # echo 1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:20.504 10:16:25 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@353 -- # local d=2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:20.504 10:16:25 ftl -- scripts/common.sh@355 -- # echo 2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:20.504 10:16:25 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:20.504 10:16:25 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:20.504 10:16:25 ftl -- scripts/common.sh@368 -- # return 0 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:20.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.504 --rc genhtml_branch_coverage=1 00:16:20.504 --rc genhtml_function_coverage=1 00:16:20.504 --rc genhtml_legend=1 00:16:20.504 --rc geninfo_all_blocks=1 00:16:20.504 --rc geninfo_unexecuted_blocks=1 00:16:20.504 00:16:20.504 ' 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:20.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.504 --rc genhtml_branch_coverage=1 00:16:20.504 --rc genhtml_function_coverage=1 00:16:20.504 --rc genhtml_legend=1 00:16:20.504 --rc geninfo_all_blocks=1 00:16:20.504 --rc geninfo_unexecuted_blocks=1 00:16:20.504 00:16:20.504 ' 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:20.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.504 --rc genhtml_branch_coverage=1 00:16:20.504 --rc genhtml_function_coverage=1 00:16:20.504 --rc 
genhtml_legend=1 00:16:20.504 --rc geninfo_all_blocks=1 00:16:20.504 --rc geninfo_unexecuted_blocks=1 00:16:20.504 00:16:20.504 ' 00:16:20.504 10:16:25 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:20.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:20.504 --rc genhtml_branch_coverage=1 00:16:20.504 --rc genhtml_function_coverage=1 00:16:20.504 --rc genhtml_legend=1 00:16:20.504 --rc geninfo_all_blocks=1 00:16:20.504 --rc geninfo_unexecuted_blocks=1 00:16:20.504 00:16:20.504 ' 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:20.504 10:16:25 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:20.504 10:16:25 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.504 10:16:25 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:20.504 10:16:25 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:20.504 10:16:25 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:20.504 10:16:25 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.504 10:16:25 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.504 10:16:25 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.504 10:16:25 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.504 10:16:25 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:20.504 10:16:25 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:20.504 10:16:25 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:20.504 10:16:25 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.504 10:16:25 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:20.504 10:16:25 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:20.504 10:16:25 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.504 10:16:25 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:20.504 10:16:25 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.504 10:16:25 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:20.504 10:16:25 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:20.504 10:16:25 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:20.504 10:16:25 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.504 10:16:25 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:20.504 10:16:25 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:20.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:20.762 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.762 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.762 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.762 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:20.762 10:16:26 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72259 00:16:20.762 10:16:26 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:20.762 10:16:26 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72259 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@833 -- # '[' -z 72259 ']' 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:20.762 10:16:26 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:20.762 [2024-11-04 10:16:26.466211] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:16:20.762 [2024-11-04 10:16:26.466487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72259 ] 00:16:21.019 [2024-11-04 10:16:26.626993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:21.019 [2024-11-04 10:16:26.727046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:21.585 10:16:27 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:21.585 10:16:27 ftl -- common/autotest_common.sh@866 -- # return 0 00:16:21.585 10:16:27 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:21.847 10:16:27 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:22.413 10:16:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:22.413 10:16:28 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:22.978 10:16:28 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:22.978 10:16:28 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:22.978 10:16:28 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@50 -- # break 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:23.237 10:16:28 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:23.237 10:16:28 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:23.495 10:16:29 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:23.495 10:16:29 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:23.495 10:16:29 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:23.495 10:16:29 ftl -- ftl/ftl.sh@63 -- # break 00:16:23.495 10:16:29 ftl -- ftl/ftl.sh@66 -- # killprocess 72259 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@952 -- # '[' -z 72259 ']' 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@956 -- # kill -0 72259 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@957 -- # uname 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72259 00:16:23.495 killing process with pid 72259 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72259' 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@971 -- # kill 72259 00:16:23.495 10:16:29 ftl -- common/autotest_common.sh@976 -- # wait 72259 00:16:24.868 10:16:30 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:24.868 10:16:30 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.868 10:16:30 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:24.868 10:16:30 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:24.868 10:16:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:24.868 ************************************ 00:16:24.868 START TEST ftl_fio_basic 00:16:24.868 ************************************ 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:24.868 * Looking for test storage... 
00:16:24.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:24.868 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.869 --rc genhtml_branch_coverage=1 00:16:24.869 --rc genhtml_function_coverage=1 00:16:24.869 --rc genhtml_legend=1 00:16:24.869 --rc geninfo_all_blocks=1 00:16:24.869 --rc geninfo_unexecuted_blocks=1 00:16:24.869 00:16:24.869 ' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.869 --rc 
genhtml_branch_coverage=1 00:16:24.869 --rc genhtml_function_coverage=1 00:16:24.869 --rc genhtml_legend=1 00:16:24.869 --rc geninfo_all_blocks=1 00:16:24.869 --rc geninfo_unexecuted_blocks=1 00:16:24.869 00:16:24.869 ' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.869 --rc genhtml_branch_coverage=1 00:16:24.869 --rc genhtml_function_coverage=1 00:16:24.869 --rc genhtml_legend=1 00:16:24.869 --rc geninfo_all_blocks=1 00:16:24.869 --rc geninfo_unexecuted_blocks=1 00:16:24.869 00:16:24.869 ' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:24.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:24.869 --rc genhtml_branch_coverage=1 00:16:24.869 --rc genhtml_function_coverage=1 00:16:24.869 --rc genhtml_legend=1 00:16:24.869 --rc geninfo_all_blocks=1 00:16:24.869 --rc geninfo_unexecuted_blocks=1 00:16:24.869 00:16:24.869 ' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:24.869 
10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72385 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72385 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72385 ']' 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
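fio.sh@44-46 above starts the SPDK target with -m 7 (a CPU mask, binary 111, i.e. cores 0-2, matching the three reactor threads reported just below) and then blocks in waitforlisten until pid 72385 answers on /var/tmp/spdk.sock. The helper itself lives in common/autotest_common.sh; a rough stand-in for what it does, assuming the binary and socket paths from this run (not the helper's actual body), is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 &
    svcpid=$!
    # poll the RPC socket until the target responds; bail out if it died first
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$svcpid" 2>/dev/null || { echo "spdk_tgt exited before listening" >&2; exit 1; }
      sleep 0.1
    done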
00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:24.869 10:16:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:24.869 [2024-11-04 10:16:30.433909] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:16:24.869 [2024-11-04 10:16:30.434149] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72385 ] 00:16:24.869 [2024-11-04 10:16:30.590106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:25.126 [2024-11-04 10:16:30.673527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:25.126 [2024-11-04 10:16:30.673900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.126 [2024-11-04 10:16:30.673928] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:25.691 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:25.948 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:26.206 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:26.206 { 00:16:26.206 "name": "nvme0n1", 00:16:26.206 "aliases": [ 00:16:26.206 "3c1dcd82-ee6f-4f1c-ad8b-f07f724a5c67" 00:16:26.206 ], 00:16:26.206 "product_name": "NVMe disk", 00:16:26.206 "block_size": 4096, 00:16:26.206 "num_blocks": 1310720, 00:16:26.206 "uuid": "3c1dcd82-ee6f-4f1c-ad8b-f07f724a5c67", 00:16:26.206 "numa_id": -1, 00:16:26.206 "assigned_rate_limits": { 00:16:26.206 "rw_ios_per_sec": 0, 00:16:26.206 "rw_mbytes_per_sec": 0, 00:16:26.206 "r_mbytes_per_sec": 0, 00:16:26.206 "w_mbytes_per_sec": 0 00:16:26.206 }, 00:16:26.206 "claimed": false, 00:16:26.206 "zoned": false, 00:16:26.206 "supported_io_types": { 00:16:26.206 "read": true, 00:16:26.206 "write": true, 00:16:26.206 "unmap": true, 00:16:26.206 "flush": true, 00:16:26.206 "reset": true, 00:16:26.206 "nvme_admin": true, 00:16:26.206 "nvme_io": true, 00:16:26.206 "nvme_io_md": 
false, 00:16:26.206 "write_zeroes": true, 00:16:26.206 "zcopy": false, 00:16:26.206 "get_zone_info": false, 00:16:26.206 "zone_management": false, 00:16:26.206 "zone_append": false, 00:16:26.206 "compare": true, 00:16:26.206 "compare_and_write": false, 00:16:26.206 "abort": true, 00:16:26.206 "seek_hole": false, 00:16:26.206 "seek_data": false, 00:16:26.206 "copy": true, 00:16:26.206 "nvme_iov_md": false 00:16:26.206 }, 00:16:26.206 "driver_specific": { 00:16:26.206 "nvme": [ 00:16:26.206 { 00:16:26.206 "pci_address": "0000:00:11.0", 00:16:26.206 "trid": { 00:16:26.206 "trtype": "PCIe", 00:16:26.206 "traddr": "0000:00:11.0" 00:16:26.206 }, 00:16:26.206 "ctrlr_data": { 00:16:26.206 "cntlid": 0, 00:16:26.206 "vendor_id": "0x1b36", 00:16:26.206 "model_number": "QEMU NVMe Ctrl", 00:16:26.206 "serial_number": "12341", 00:16:26.206 "firmware_revision": "8.0.0", 00:16:26.206 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:26.206 "oacs": { 00:16:26.206 "security": 0, 00:16:26.206 "format": 1, 00:16:26.206 "firmware": 0, 00:16:26.206 "ns_manage": 1 00:16:26.207 }, 00:16:26.207 "multi_ctrlr": false, 00:16:26.207 "ana_reporting": false 00:16:26.207 }, 00:16:26.207 "vs": { 00:16:26.207 "nvme_version": "1.4" 00:16:26.207 }, 00:16:26.207 "ns_data": { 00:16:26.207 "id": 1, 00:16:26.207 "can_share": false 00:16:26.207 } 00:16:26.207 } 00:16:26.207 ], 00:16:26.207 "mp_policy": "active_passive" 00:16:26.207 } 00:16:26.207 } 00:16:26.207 ]' 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:26.207 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:26.464 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:26.464 10:16:31 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:26.464 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=44ad26f6-aeca-455f-b275-44bb4ca0428f 00:16:26.464 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 44ad26f6-aeca-455f-b275-44bb4ca0428f 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.722 10:16:32 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:26.722 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:26.987 { 00:16:26.987 "name": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:26.987 "aliases": [ 00:16:26.987 "lvs/nvme0n1p0" 00:16:26.987 ], 00:16:26.987 "product_name": "Logical Volume", 00:16:26.987 "block_size": 4096, 00:16:26.987 "num_blocks": 26476544, 00:16:26.987 "uuid": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:26.987 "assigned_rate_limits": { 00:16:26.987 "rw_ios_per_sec": 0, 00:16:26.987 "rw_mbytes_per_sec": 0, 00:16:26.987 "r_mbytes_per_sec": 0, 00:16:26.987 "w_mbytes_per_sec": 0 00:16:26.987 }, 00:16:26.987 "claimed": false, 00:16:26.987 "zoned": false, 00:16:26.987 "supported_io_types": { 00:16:26.987 "read": true, 00:16:26.987 "write": true, 00:16:26.987 "unmap": true, 00:16:26.987 "flush": false, 00:16:26.987 "reset": true, 00:16:26.987 "nvme_admin": false, 00:16:26.987 "nvme_io": false, 00:16:26.987 "nvme_io_md": false, 00:16:26.987 "write_zeroes": true, 00:16:26.987 "zcopy": false, 00:16:26.987 "get_zone_info": false, 00:16:26.987 "zone_management": false, 00:16:26.987 "zone_append": false, 00:16:26.987 "compare": false, 00:16:26.987 "compare_and_write": false, 00:16:26.987 "abort": false, 00:16:26.987 "seek_hole": true, 00:16:26.987 "seek_data": true, 00:16:26.987 "copy": false, 00:16:26.987 "nvme_iov_md": false 00:16:26.987 }, 00:16:26.987 "driver_specific": { 00:16:26.987 "lvol": { 00:16:26.987 "lvol_store_uuid": "44ad26f6-aeca-455f-b275-44bb4ca0428f", 00:16:26.987 "base_bdev": "nvme0n1", 00:16:26.987 "thin_provision": true, 00:16:26.987 "num_allocated_clusters": 0, 00:16:26.987 "snapshot": false, 00:16:26.987 "clone": false, 00:16:26.987 "esnap_clone": false 00:16:26.987 } 00:16:26.987 } 00:16:26.987 } 00:16:26.987 ]' 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:26.987 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:27.245 10:16:32 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:27.502 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:27.502 { 00:16:27.502 "name": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:27.502 "aliases": [ 00:16:27.502 "lvs/nvme0n1p0" 00:16:27.502 ], 00:16:27.502 "product_name": "Logical Volume", 00:16:27.502 "block_size": 4096, 00:16:27.502 "num_blocks": 26476544, 00:16:27.502 "uuid": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:27.502 "assigned_rate_limits": { 00:16:27.502 "rw_ios_per_sec": 0, 00:16:27.502 "rw_mbytes_per_sec": 0, 00:16:27.502 "r_mbytes_per_sec": 0, 00:16:27.502 "w_mbytes_per_sec": 0 00:16:27.502 }, 00:16:27.502 "claimed": false, 00:16:27.502 "zoned": false, 00:16:27.502 "supported_io_types": { 00:16:27.502 "read": true, 00:16:27.502 "write": true, 00:16:27.503 "unmap": true, 00:16:27.503 "flush": false, 00:16:27.503 "reset": true, 00:16:27.503 "nvme_admin": false, 00:16:27.503 "nvme_io": false, 00:16:27.503 "nvme_io_md": false, 00:16:27.503 "write_zeroes": true, 00:16:27.503 "zcopy": false, 00:16:27.503 "get_zone_info": false, 00:16:27.503 "zone_management": false, 00:16:27.503 "zone_append": false, 00:16:27.503 "compare": false, 00:16:27.503 "compare_and_write": false, 00:16:27.503 "abort": false, 00:16:27.503 "seek_hole": true, 00:16:27.503 "seek_data": true, 00:16:27.503 "copy": false, 00:16:27.503 "nvme_iov_md": false 00:16:27.503 }, 00:16:27.503 "driver_specific": { 00:16:27.503 "lvol": { 00:16:27.503 "lvol_store_uuid": "44ad26f6-aeca-455f-b275-44bb4ca0428f", 00:16:27.503 "base_bdev": "nvme0n1", 00:16:27.503 "thin_provision": true, 00:16:27.503 "num_allocated_clusters": 0, 00:16:27.503 "snapshot": false, 00:16:27.503 "clone": false, 00:16:27.503 "esnap_clone": false 00:16:27.503 } 00:16:27.503 } 00:16:27.503 } 00:16:27.503 ]' 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:27.503 10:16:33 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:27.760 
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:27.760 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0855a63f-fbf2-4a6b-90cd-b2694b81a305 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:28.018 { 00:16:28.018 "name": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:28.018 "aliases": [ 00:16:28.018 "lvs/nvme0n1p0" 00:16:28.018 ], 00:16:28.018 "product_name": "Logical Volume", 00:16:28.018 "block_size": 4096, 00:16:28.018 "num_blocks": 26476544, 00:16:28.018 "uuid": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:28.018 "assigned_rate_limits": { 00:16:28.018 "rw_ios_per_sec": 0, 00:16:28.018 "rw_mbytes_per_sec": 0, 00:16:28.018 "r_mbytes_per_sec": 0, 00:16:28.018 "w_mbytes_per_sec": 0 00:16:28.018 }, 00:16:28.018 "claimed": false, 00:16:28.018 "zoned": false, 00:16:28.018 "supported_io_types": { 00:16:28.018 "read": true, 00:16:28.018 "write": true, 00:16:28.018 "unmap": true, 00:16:28.018 "flush": false, 00:16:28.018 "reset": true, 00:16:28.018 "nvme_admin": false, 00:16:28.018 "nvme_io": false, 00:16:28.018 "nvme_io_md": false, 00:16:28.018 "write_zeroes": true, 00:16:28.018 "zcopy": false, 00:16:28.018 "get_zone_info": false, 00:16:28.018 "zone_management": false, 00:16:28.018 "zone_append": false, 00:16:28.018 "compare": false, 00:16:28.018 "compare_and_write": false, 00:16:28.018 "abort": false, 00:16:28.018 "seek_hole": true, 00:16:28.018 "seek_data": true, 00:16:28.018 "copy": false, 00:16:28.018 "nvme_iov_md": false 00:16:28.018 }, 00:16:28.018 "driver_specific": { 00:16:28.018 "lvol": { 00:16:28.018 "lvol_store_uuid": "44ad26f6-aeca-455f-b275-44bb4ca0428f", 00:16:28.018 "base_bdev": "nvme0n1", 00:16:28.018 "thin_provision": true, 00:16:28.018 "num_allocated_clusters": 0, 00:16:28.018 "snapshot": false, 00:16:28.018 "clone": false, 00:16:28.018 "esnap_clone": false 00:16:28.018 } 00:16:28.018 } 00:16:28.018 } 00:16:28.018 ]' 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:28.018 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:28.019 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:28.019 10:16:33 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 0855a63f-fbf2-4a6b-90cd-b2694b81a305 -c nvc0n1p0 --l2p_dram_limit 60 00:16:28.278 [2024-11-04 10:16:33.830197] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.830243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:28.278 [2024-11-04 10:16:33.830257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:28.278 [2024-11-04 10:16:33.830263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.830320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.830328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:28.278 [2024-11-04 10:16:33.830336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:16:28.278 [2024-11-04 10:16:33.830343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.830382] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:28.278 [2024-11-04 10:16:33.830965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:28.278 [2024-11-04 10:16:33.830982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.830989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:28.278 [2024-11-04 10:16:33.830997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:16:28.278 [2024-11-04 10:16:33.831002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.831069] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 12c2caee-717e-43dd-b91f-d18074a63be3 00:16:28.278 [2024-11-04 10:16:33.832094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.832119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:28.278 [2024-11-04 10:16:33.832129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:16:28.278 [2024-11-04 10:16:33.832136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.837313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.837441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:28.278 [2024-11-04 10:16:33.837453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.116 ms 00:16:28.278 [2024-11-04 10:16:33.837461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.837540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.837550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:28.278 [2024-11-04 10:16:33.837556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:28.278 [2024-11-04 10:16:33.837566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.837604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.837612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:28.278 [2024-11-04 10:16:33.837618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:28.278 [2024-11-04 10:16:33.837625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:16:28.278 [2024-11-04 10:16:33.837654] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:28.278 [2024-11-04 10:16:33.840566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.840660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:28.278 [2024-11-04 10:16:33.840674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.914 ms 00:16:28.278 [2024-11-04 10:16:33.840681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.840727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.840736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:28.278 [2024-11-04 10:16:33.840744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:28.278 [2024-11-04 10:16:33.840750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.840795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:28.278 [2024-11-04 10:16:33.840909] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:28.278 [2024-11-04 10:16:33.840923] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:28.278 [2024-11-04 10:16:33.840932] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:28.278 [2024-11-04 10:16:33.840941] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:28.278 [2024-11-04 10:16:33.840947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:28.278 [2024-11-04 10:16:33.840955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:28.278 [2024-11-04 10:16:33.840961] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:28.278 [2024-11-04 10:16:33.840968] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:28.278 [2024-11-04 10:16:33.840973] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:28.278 [2024-11-04 10:16:33.840981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.840986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:28.278 [2024-11-04 10:16:33.840996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:16:28.278 [2024-11-04 10:16:33.841001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.841075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.278 [2024-11-04 10:16:33.841081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:28.278 [2024-11-04 10:16:33.841089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:16:28.278 [2024-11-04 10:16:33.841094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.278 [2024-11-04 10:16:33.841179] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:28.279 [2024-11-04 10:16:33.841188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:28.279 
[2024-11-04 10:16:33.841196] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:28.279 [2024-11-04 10:16:33.841215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:28.279 [2024-11-04 10:16:33.841233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:28.279 [2024-11-04 10:16:33.841245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:28.279 [2024-11-04 10:16:33.841250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:28.279 [2024-11-04 10:16:33.841257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:28.279 [2024-11-04 10:16:33.841262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:28.279 [2024-11-04 10:16:33.841272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:28.279 [2024-11-04 10:16:33.841277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:28.279 [2024-11-04 10:16:33.841292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:28.279 [2024-11-04 10:16:33.841311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:28.279 [2024-11-04 10:16:33.841327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:28.279 [2024-11-04 10:16:33.841344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841356] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:28.279 [2024-11-04 10:16:33.841361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:28.279 [2024-11-04 10:16:33.841380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:16:28.279 [2024-11-04 10:16:33.841391] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:28.279 [2024-11-04 10:16:33.841406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:28.279 [2024-11-04 10:16:33.841413] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:28.279 [2024-11-04 10:16:33.841418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:28.279 [2024-11-04 10:16:33.841424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:28.279 [2024-11-04 10:16:33.841429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841435] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:28.279 [2024-11-04 10:16:33.841440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:28.279 [2024-11-04 10:16:33.841447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841452] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:28.279 [2024-11-04 10:16:33.841459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:28.279 [2024-11-04 10:16:33.841464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:28.279 [2024-11-04 10:16:33.841479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:28.279 [2024-11-04 10:16:33.841487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:28.279 [2024-11-04 10:16:33.841492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:28.279 [2024-11-04 10:16:33.841499] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:28.279 [2024-11-04 10:16:33.841504] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:28.279 [2024-11-04 10:16:33.841511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:28.279 [2024-11-04 10:16:33.841519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:28.279 [2024-11-04 10:16:33.841527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:28.279 [2024-11-04 10:16:33.841541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:28.279 [2024-11-04 10:16:33.841546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:28.279 [2024-11-04 10:16:33.841553] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:28.279 [2024-11-04 10:16:33.841558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:28.279 [2024-11-04 10:16:33.841565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:28.279 [2024-11-04 
10:16:33.841571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:28.279 [2024-11-04 10:16:33.841578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:28.279 [2024-11-04 10:16:33.841583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:28.279 [2024-11-04 10:16:33.841591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:28.279 [2024-11-04 10:16:33.841622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:28.279 [2024-11-04 10:16:33.841629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841635] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:28.279 [2024-11-04 10:16:33.841642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:28.279 [2024-11-04 10:16:33.841648] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:28.279 [2024-11-04 10:16:33.841655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:28.279 [2024-11-04 10:16:33.841661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:28.279 [2024-11-04 10:16:33.841667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:28.279 [2024-11-04 10:16:33.841674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:16:28.279 [2024-11-04 10:16:33.841682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:28.279 [2024-11-04 10:16:33.841753] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
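The layout dump above can be cross-checked against the --l2p_dram_limit 60 passed to bdev_ftl_create at fio.sh@60. The FTL device exposes 20971520 blocks of 4096 B (80 GiB of user data), and the L2P keeps one 4-byte entry per block, so the full table is:

    20971520 entries x 4 B = 83886080 B = 80 MiB   -> "Region l2p ... blocks: 80.00 MiB"

With resident L2P capped at 60 MiB of DRAM, only part of that table is held in memory at a time, which is why startup reports "l2p maximum resident size is: 59 (of 60) MiB" below rather than the full 80 MiB.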
00:16:28.279 [2024-11-04 10:16:33.841765] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:31.559 [2024-11-04 10:16:36.803512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.803697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:31.559 [2024-11-04 10:16:36.803771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2961.746 ms 00:16:31.559 [2024-11-04 10:16:36.803814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.829561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.829721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:31.559 [2024-11-04 10:16:36.829793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.486 ms 00:16:31.559 [2024-11-04 10:16:36.829821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.830011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.830079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:31.559 [2024-11-04 10:16:36.830130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:16:31.559 [2024-11-04 10:16:36.830157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.878027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.878232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:31.559 [2024-11-04 10:16:36.878317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.780 ms 00:16:31.559 [2024-11-04 10:16:36.878409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.878492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.878589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:31.559 [2024-11-04 10:16:36.878656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:31.559 [2024-11-04 10:16:36.878691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.879135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.879272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:31.559 [2024-11-04 10:16:36.879342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.327 ms 00:16:31.559 [2024-11-04 10:16:36.879406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.879615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.879723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:31.559 [2024-11-04 10:16:36.879811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:16:31.559 [2024-11-04 10:16:36.879890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.896159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.896298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:31.559 [2024-11-04 
10:16:36.896361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.206 ms 00:16:31.559 [2024-11-04 10:16:36.896388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.907850] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:31.559 [2024-11-04 10:16:36.922681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.922863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:31.559 [2024-11-04 10:16:36.922973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.078 ms 00:16:31.559 [2024-11-04 10:16:36.923005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.970453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.970585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:31.559 [2024-11-04 10:16:36.970641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.386 ms 00:16:31.559 [2024-11-04 10:16:36.970664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.970866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.970938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:31.559 [2024-11-04 10:16:36.970966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:16:31.559 [2024-11-04 10:16:36.970986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:36.994180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:36.994314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:31.559 [2024-11-04 10:16:36.994372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.102 ms 00:16:31.559 [2024-11-04 10:16:36.994397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.016997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.017103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:31.559 [2024-11-04 10:16:37.017165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.463 ms 00:16:31.559 [2024-11-04 10:16:37.017185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.017760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.017858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:31.559 [2024-11-04 10:16:37.017908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:16:31.559 [2024-11-04 10:16:37.017931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.093328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.093524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:31.559 [2024-11-04 10:16:37.093610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.336 ms 00:16:31.559 [2024-11-04 10:16:37.093649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 
10:16:37.126391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.126534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:31.559 [2024-11-04 10:16:37.126556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.541 ms 00:16:31.559 [2024-11-04 10:16:37.126565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.149525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.149565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:31.559 [2024-11-04 10:16:37.149579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.913 ms 00:16:31.559 [2024-11-04 10:16:37.149586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.172513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.172635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:31.559 [2024-11-04 10:16:37.172654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.871 ms 00:16:31.559 [2024-11-04 10:16:37.172662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.172713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.559 [2024-11-04 10:16:37.172723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:31.559 [2024-11-04 10:16:37.172735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:31.559 [2024-11-04 10:16:37.172742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.559 [2024-11-04 10:16:37.172850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:31.560 [2024-11-04 10:16:37.172861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:31.560 [2024-11-04 10:16:37.172870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:31.560 [2024-11-04 10:16:37.172878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:31.560 [2024-11-04 10:16:37.173762] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3343.155 ms, result 0 00:16:31.560 { 00:16:31.560 "name": "ftl0", 00:16:31.560 "uuid": "12c2caee-717e-43dd-b91f-d18074a63be3" 00:16:31.560 } 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:31.560 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.817 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:32.075 [ 00:16:32.075 { 00:16:32.075 "name": "ftl0", 00:16:32.075 "aliases": [ 00:16:32.075 "12c2caee-717e-43dd-b91f-d18074a63be3" 00:16:32.075 ], 00:16:32.075 "product_name": "FTL 
disk", 00:16:32.075 "block_size": 4096, 00:16:32.075 "num_blocks": 20971520, 00:16:32.075 "uuid": "12c2caee-717e-43dd-b91f-d18074a63be3", 00:16:32.075 "assigned_rate_limits": { 00:16:32.075 "rw_ios_per_sec": 0, 00:16:32.075 "rw_mbytes_per_sec": 0, 00:16:32.075 "r_mbytes_per_sec": 0, 00:16:32.075 "w_mbytes_per_sec": 0 00:16:32.075 }, 00:16:32.075 "claimed": false, 00:16:32.075 "zoned": false, 00:16:32.075 "supported_io_types": { 00:16:32.075 "read": true, 00:16:32.075 "write": true, 00:16:32.075 "unmap": true, 00:16:32.075 "flush": true, 00:16:32.075 "reset": false, 00:16:32.075 "nvme_admin": false, 00:16:32.075 "nvme_io": false, 00:16:32.075 "nvme_io_md": false, 00:16:32.075 "write_zeroes": true, 00:16:32.075 "zcopy": false, 00:16:32.075 "get_zone_info": false, 00:16:32.075 "zone_management": false, 00:16:32.075 "zone_append": false, 00:16:32.075 "compare": false, 00:16:32.075 "compare_and_write": false, 00:16:32.075 "abort": false, 00:16:32.075 "seek_hole": false, 00:16:32.075 "seek_data": false, 00:16:32.075 "copy": false, 00:16:32.075 "nvme_iov_md": false 00:16:32.075 }, 00:16:32.075 "driver_specific": { 00:16:32.075 "ftl": { 00:16:32.075 "base_bdev": "0855a63f-fbf2-4a6b-90cd-b2694b81a305", 00:16:32.075 "cache": "nvc0n1p0" 00:16:32.075 } 00:16:32.075 } 00:16:32.075 } 00:16:32.075 ] 00:16:32.075 10:16:37 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:16:32.075 10:16:37 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:32.075 10:16:37 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:32.075 10:16:37 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:32.075 10:16:37 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:32.334 [2024-11-04 10:16:37.970983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:37.971140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:32.334 [2024-11-04 10:16:37.971160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:32.334 [2024-11-04 10:16:37.971170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:37.971210] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:32.334 [2024-11-04 10:16:37.973842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:37.973875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:32.334 [2024-11-04 10:16:37.973887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms 00:16:32.334 [2024-11-04 10:16:37.973895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:37.974408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:37.974426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:32.334 [2024-11-04 10:16:37.974437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.469 ms 00:16:32.334 [2024-11-04 10:16:37.974444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:37.977691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:37.977800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:32.334 
[2024-11-04 10:16:37.977819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.216 ms 00:16:32.334 [2024-11-04 10:16:37.977827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:37.983974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:37.983999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:32.334 [2024-11-04 10:16:37.984010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.115 ms 00:16:32.334 [2024-11-04 10:16:37.984017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:38.007866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:38.007908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:32.334 [2024-11-04 10:16:38.007920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.760 ms 00:16:32.334 [2024-11-04 10:16:38.007928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:38.022769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:38.022813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:32.334 [2024-11-04 10:16:38.022826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.774 ms 00:16:32.334 [2024-11-04 10:16:38.022834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:38.023038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:38.023049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:32.334 [2024-11-04 10:16:38.023059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:16:32.334 [2024-11-04 10:16:38.023066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:38.046147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:38.046181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:32.334 [2024-11-04 10:16:38.046193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.048 ms 00:16:32.334 [2024-11-04 10:16:38.046200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.334 [2024-11-04 10:16:38.068923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.334 [2024-11-04 10:16:38.069052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:32.334 [2024-11-04 10:16:38.069071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.674 ms 00:16:32.334 [2024-11-04 10:16:38.069079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.593 [2024-11-04 10:16:38.091465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.593 [2024-11-04 10:16:38.091494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:32.593 [2024-11-04 10:16:38.091506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.338 ms 00:16:32.593 [2024-11-04 10:16:38.091514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.593 [2024-11-04 10:16:38.113856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:32.593 [2024-11-04 10:16:38.113886] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:16:32.593 [2024-11-04 10:16:38.113898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.253 ms
00:16:32.593 [2024-11-04 10:16:38.113905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.593 [2024-11-04 10:16:38.113942] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:16:32.593 [2024-11-04 10:16:38.113956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2 through 100 report identical values: 0 / 261120 wr_cnt: 0 state: free ...]
00:16:32.594 [2024-11-04 10:16:38.114833] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:16:32.594 [2024-11-04 10:16:38.114843] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 12c2caee-717e-43dd-b91f-d18074a63be3
00:16:32.594 [2024-11-04 10:16:38.114851] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:16:32.594 [2024-11-04 10:16:38.114861] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:16:32.594 [2024-11-04 10:16:38.114869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:16:32.594 [2024-11-04 10:16:38.114878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:16:32.594 [2024-11-04 10:16:38.114884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:16:32.594 [2024-11-04 10:16:38.114896] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  crit: 0
00:16:32.594 [2024-11-04 10:16:38.114902] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  high: 0
00:16:32.594 [2024-11-04 10:16:38.114924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  low: 0
00:16:32.594 [2024-11-04 10:16:38.114931] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]  start: 0
00:16:32.594 [2024-11-04 10:16:38.114940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.594 [2024-11-04 10:16:38.114947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:16:32.594 [2024-11-04 10:16:38.114957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.998 ms
00:16:32.594 [2024-11-04 10:16:38.114964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.594 [2024-11-04 10:16:38.127341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.594 [2024-11-04 10:16:38.127374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:16:32.594 [2024-11-04 10:16:38.127388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.332 ms
00:16:32.594 [2024-11-04 10:16:38.127397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.594 [2024-11-04 10:16:38.127746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:16:32.594 [2024-11-04 10:16:38.127754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:16:32.594 [2024-11-04 10:16:38.127764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms
00:16:32.594 [2024-11-04 10:16:38.127771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:16:32.594 [2024-11-04 10:16:38.177944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:16:32.594 [2024-11-04 10:16:38.178121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:16:32.594 [2024-11-04 10:16:38.178147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:16:32.594 [2024-11-04 10:16:38.178156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
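[Editor's note: the shutdown trace above is driven by two RPC calls visible in the fio.sh xtrace earlier: the bdev subsystem configuration is saved as JSON, then bdev_ftl_unload tears down ftl0, which runs the "FTL shutdown" actions (persist L2P, metadata, superblock, clean state) followed by rollback of each init step. A minimal sketch of that sequence, assuming a running SPDK target with an FTL bdev named ftl0; the redirect target is a guess based on the ftl.json file the suite removes during cleanup:]

    # Sketch, not verbatim fio.sh: save the bdev config, then unload the FTL bdev.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    {
      echo '{"subsystems": ['
      "$RPC" save_subsystem_config -n bdev   # emits the bdev subsystem config as JSON
      echo ']}'
    } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
    "$RPC" bdev_ftl_unload -b ftl0           # triggers the 'FTL shutdown' pipeline traced above
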
00:16:32.594 [2024-11-04 10:16:38.178229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.178238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:32.594 [2024-11-04 10:16:38.178248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.178256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.178372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.178383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:32.594 [2024-11-04 10:16:38.178393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.178402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.178437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.178445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:32.594 [2024-11-04 10:16:38.178454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.178462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.260492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.260538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:32.594 [2024-11-04 10:16:38.260552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.260562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.317742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.317803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:32.594 [2024-11-04 10:16:38.317815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.317822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.317896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.317904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:32.594 [2024-11-04 10:16:38.317911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.317917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.317989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.317996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:32.594 [2024-11-04 10:16:38.318004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 10:16:38.318009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.318106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.318114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:32.594 [2024-11-04 10:16:38.318122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.594 [2024-11-04 
10:16:38.318127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.594 [2024-11-04 10:16:38.318170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.594 [2024-11-04 10:16:38.318179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:32.595 [2024-11-04 10:16:38.318188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.595 [2024-11-04 10:16:38.318194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.595 [2024-11-04 10:16:38.318240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.595 [2024-11-04 10:16:38.318247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:32.595 [2024-11-04 10:16:38.318254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.595 [2024-11-04 10:16:38.318260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.595 [2024-11-04 10:16:38.318304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:32.595 [2024-11-04 10:16:38.318311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:32.595 [2024-11-04 10:16:38.318318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:32.595 [2024-11-04 10:16:38.318324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:32.595 [2024-11-04 10:16:38.318461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.462 ms, result 0 00:16:32.595 true 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72385 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72385 ']' 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72385 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72385 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:32.853 killing process with pid 72385 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72385' 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72385 00:16:32.853 10:16:38 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72385 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:38.113 10:16:43 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:38.113 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:38.113 fio-3.35 00:16:38.113 Starting 1 thread 00:16:43.370 00:16:43.370 test: (groupid=0, jobs=1): err= 0: pid=72572: Mon Nov 4 10:16:48 2024 00:16:43.370 read: IOPS=1041, BW=69.1MiB/s (72.5MB/s)(255MiB/3681msec) 00:16:43.370 slat (nsec): min=3009, max=25708, avg=4294.34, stdev=1923.19 00:16:43.370 clat (usec): min=232, max=1297, avg=434.34, stdev=189.06 00:16:43.370 lat (usec): min=236, max=1309, avg=438.64, stdev=189.74 00:16:43.370 clat percentiles (usec): 00:16:43.370 | 1.00th=[ 289], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 318], 00:16:43.370 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 355], 00:16:43.370 | 70.00th=[ 461], 80.00th=[ 529], 90.00th=[ 791], 95.00th=[ 873], 00:16:43.370 | 99.00th=[ 1045], 99.50th=[ 1106], 99.90th=[ 1188], 99.95th=[ 1254], 00:16:43.370 | 99.99th=[ 1303] 00:16:43.370 write: IOPS=1048, BW=69.6MiB/s (73.0MB/s)(256MiB/3678msec); 0 zone resets 00:16:43.370 slat (nsec): min=13606, max=55348, avg=18275.36, stdev=3626.27 00:16:43.370 clat (usec): min=284, max=2473, avg=487.41, stdev=233.46 00:16:43.370 lat (usec): min=312, max=2489, avg=505.68, stdev=234.45 00:16:43.370 clat percentiles (usec): 00:16:43.370 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 334], 20.00th=[ 343], 00:16:43.370 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 363], 60.00th=[ 412], 00:16:43.370 | 70.00th=[ 537], 80.00th=[ 611], 90.00th=[ 881], 95.00th=[ 955], 00:16:43.370 | 99.00th=[ 1270], 99.50th=[ 1598], 99.90th=[ 1876], 99.95th=[ 2089], 00:16:43.370 | 99.99th=[ 2474] 00:16:43.370 bw ( KiB/s): min=41072, max=93840, per=98.25%, avg=70040.00, stdev=22517.89, samples=7 00:16:43.370 iops : min= 604, max= 1380, avg=1030.00, stdev=331.15, samples=7 00:16:43.370 lat (usec) : 250=0.03%, 500=70.35%, 
750=17.32%, 1000=9.79% 00:16:43.370 lat (msec) : 2=2.48%, 4=0.03% 00:16:43.370 cpu : usr=99.29%, sys=0.11%, ctx=8, majf=0, minf=1169 00:16:43.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:43.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:43.370 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:43.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:43.370 00:16:43.370 Run status group 0 (all jobs): 00:16:43.370 READ: bw=69.1MiB/s (72.5MB/s), 69.1MiB/s-69.1MiB/s (72.5MB/s-72.5MB/s), io=255MiB (267MB), run=3681-3681msec 00:16:43.370 WRITE: bw=69.6MiB/s (73.0MB/s), 69.6MiB/s-69.6MiB/s (73.0MB/s-73.0MB/s), io=256MiB (269MB), run=3678-3678msec 00:16:44.307 ----------------------------------------------------- 00:16:44.307 Suppressions used: 00:16:44.307 count bytes template 00:16:44.307 1 5 /usr/src/fio/parse.c 00:16:44.307 1 8 libtcmalloc_minimal.so 00:16:44.307 1 904 libcrypto.so 00:16:44.307 ----------------------------------------------------- 00:16:44.307 00:16:44.307 10:16:49 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:44.307 10:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:44.307 10:16:49 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:44.307 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:44.565 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:44.565 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:44.565 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:44.565 10:16:50 ftl.ftl_fio_basic 
-- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:44.565 10:16:50 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:44.565 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:44.565 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:44.565 fio-3.35 00:16:44.565 Starting 2 threads 00:17:11.110 00:17:11.110 first_half: (groupid=0, jobs=1): err= 0: pid=72671: Mon Nov 4 10:17:12 2024 00:17:11.110 read: IOPS=3028, BW=11.8MiB/s (12.4MB/s)(256MiB/21621msec) 00:17:11.110 slat (nsec): min=3015, max=47195, avg=4545.95, stdev=1299.64 00:17:11.110 clat (usec): min=530, max=260051, avg=35662.64, stdev=21369.85 00:17:11.110 lat (usec): min=536, max=260057, avg=35667.18, stdev=21369.96 00:17:11.110 clat percentiles (msec): 00:17:11.110 | 1.00th=[ 9], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 29], 00:17:11.110 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:17:11.110 | 70.00th=[ 34], 80.00th=[ 35], 90.00th=[ 41], 95.00th=[ 69], 00:17:11.110 | 99.00th=[ 148], 99.50th=[ 155], 99.90th=[ 199], 99.95th=[ 234], 00:17:11.110 | 99.99th=[ 257] 00:17:11.110 write: IOPS=3036, BW=11.9MiB/s (12.4MB/s)(256MiB/21584msec); 0 zone resets 00:17:11.110 slat (usec): min=3, max=256, avg= 5.92, stdev= 2.71 00:17:11.110 clat (usec): min=330, max=40548, avg=6577.28, stdev=6475.66 00:17:11.110 lat (usec): min=337, max=40554, avg=6583.20, stdev=6475.78 00:17:11.110 clat percentiles (usec): 00:17:11.110 | 1.00th=[ 725], 5.00th=[ 971], 10.00th=[ 1270], 20.00th=[ 2802], 00:17:11.110 | 30.00th=[ 3490], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 5407], 00:17:11.110 | 70.00th=[ 5866], 80.00th=[ 7046], 90.00th=[16450], 95.00th=[21890], 00:17:11.110 | 99.00th=[31065], 99.50th=[32637], 99.90th=[38011], 99.95th=[39060], 00:17:11.110 | 99.99th=[40109] 00:17:11.110 bw ( KiB/s): min= 128, max=53872, per=97.41%, avg=23661.82, stdev=15748.00, samples=22 00:17:11.110 iops : min= 32, max=13468, avg=5915.45, stdev=3937.00, samples=22 00:17:11.110 lat (usec) : 500=0.05%, 750=0.65%, 1000=2.04% 00:17:11.110 lat (msec) : 2=4.46%, 4=11.06%, 10=25.30%, 20=4.92%, 50=48.23% 00:17:11.110 lat (msec) : 100=1.63%, 250=1.65%, 500=0.01% 00:17:11.110 cpu : usr=99.27%, sys=0.11%, ctx=38, majf=0, minf=5540 00:17:11.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.110 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.110 issued rwts: total=65475,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.110 second_half: (groupid=0, jobs=1): err= 0: pid=72672: Mon Nov 4 10:17:12 2024 00:17:11.110 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(256MiB/21428msec) 00:17:11.110 slat (nsec): min=3034, max=25080, avg=3864.87, stdev=758.25 00:17:11.110 clat (msec): min=9, max=186, avg=35.99, stdev=19.51 00:17:11.110 lat (msec): min=9, max=186, avg=35.99, stdev=19.51 00:17:11.110 clat percentiles (msec): 00:17:11.110 | 1.00th=[ 27], 5.00th=[ 29], 10.00th=[ 29], 20.00th=[ 30], 00:17:11.110 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 32], 00:17:11.110 | 70.00th=[ 34], 80.00th=[ 36], 90.00th=[ 41], 95.00th=[ 65], 
00:17:11.110 | 99.00th=[ 144], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 169], 00:17:11.110 | 99.99th=[ 178] 00:17:11.110 write: IOPS=3075, BW=12.0MiB/s (12.6MB/s)(256MiB/21310msec); 0 zone resets 00:17:11.110 slat (usec): min=3, max=1044, avg= 5.30, stdev= 6.36 00:17:11.110 clat (usec): min=376, max=32985, avg=5872.67, stdev=4515.41 00:17:11.110 lat (usec): min=384, max=32991, avg=5877.98, stdev=4515.87 00:17:11.110 clat percentiles (usec): 00:17:11.110 | 1.00th=[ 857], 5.00th=[ 1795], 10.00th=[ 2442], 20.00th=[ 3032], 00:17:11.110 | 30.00th=[ 3589], 40.00th=[ 4178], 50.00th=[ 4752], 60.00th=[ 5211], 00:17:11.110 | 70.00th=[ 5604], 80.00th=[ 6456], 90.00th=[11731], 95.00th=[17433], 00:17:11.110 | 99.00th=[22938], 99.50th=[25297], 99.90th=[30278], 99.95th=[31327], 00:17:11.110 | 99.99th=[32113] 00:17:11.110 bw ( KiB/s): min= 88, max=44792, per=100.00%, avg=26110.80, stdev=15389.98, samples=20 00:17:11.110 iops : min= 22, max=11198, avg=6527.70, stdev=3847.49, samples=20 00:17:11.110 lat (usec) : 500=0.03%, 750=0.25%, 1000=0.42% 00:17:11.110 lat (msec) : 2=2.29%, 4=15.12%, 10=25.48%, 20=5.46%, 50=47.79% 00:17:11.110 lat (msec) : 100=1.56%, 250=1.62% 00:17:11.110 cpu : usr=99.37%, sys=0.14%, ctx=46, majf=0, minf=5573 00:17:11.110 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:17:11.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:11.110 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:11.110 issued rwts: total=65489,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:11.110 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:11.110 00:17:11.110 Run status group 0 (all jobs): 00:17:11.110 READ: bw=23.7MiB/s (24.8MB/s), 11.8MiB/s-11.9MiB/s (12.4MB/s-12.5MB/s), io=512MiB (536MB), run=21428-21621msec 00:17:11.110 WRITE: bw=23.7MiB/s (24.9MB/s), 11.9MiB/s-12.0MiB/s (12.4MB/s-12.6MB/s), io=512MiB (537MB), run=21310-21584msec 00:17:11.110 ----------------------------------------------------- 00:17:11.110 Suppressions used: 00:17:11.110 count bytes template 00:17:11.110 2 10 /usr/src/fio/parse.c 00:17:11.110 3 288 /usr/src/fio/iolog.c 00:17:11.110 1 8 libtcmalloc_minimal.so 00:17:11.110 1 904 libcrypto.so 00:17:11.110 ----------------------------------------------------- 00:17:11.110 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:11.110 10:17:15 
ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:11.110 10:17:15 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:17:11.110 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:11.110 fio-3.35 00:17:11.110 Starting 1 thread 00:17:25.978 00:17:25.978 test: (groupid=0, jobs=1): err= 0: pid=72964: Mon Nov 4 10:17:29 2024 00:17:25.978 read: IOPS=8328, BW=32.5MiB/s (34.1MB/s)(255MiB/7829msec) 00:17:25.978 slat (nsec): min=2987, max=23131, avg=3625.24, stdev=806.86 00:17:25.978 clat (usec): min=528, max=30131, avg=15361.94, stdev=1444.10 00:17:25.978 lat (usec): min=532, max=30134, avg=15365.56, stdev=1444.19 00:17:25.978 clat percentiles (usec): 00:17:25.978 | 1.00th=[14353], 5.00th=[14484], 10.00th=[14484], 20.00th=[14615], 00:17:25.978 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[15270], 00:17:25.978 | 70.00th=[15270], 80.00th=[15401], 90.00th=[15795], 95.00th=[18482], 00:17:25.978 | 99.00th=[21627], 99.50th=[22938], 99.90th=[26870], 99.95th=[27395], 00:17:25.978 | 99.99th=[29230] 00:17:25.978 write: IOPS=13.7k, BW=53.7MiB/s (56.3MB/s)(256MiB/4770msec); 0 zone resets 00:17:25.978 slat (usec): min=4, max=915, avg= 7.26, stdev= 4.40 00:17:25.978 clat (usec): min=484, max=52233, avg=9266.90, stdev=10312.91 00:17:25.978 lat (usec): min=491, max=52241, avg=9274.16, stdev=10312.91 00:17:25.978 clat percentiles (usec): 00:17:25.978 | 1.00th=[ 660], 5.00th=[ 766], 10.00th=[ 848], 20.00th=[ 979], 00:17:25.978 | 30.00th=[ 1139], 40.00th=[ 2147], 50.00th=[ 7177], 60.00th=[ 8586], 00:17:25.978 | 70.00th=[10159], 80.00th=[12649], 90.00th=[27657], 95.00th=[31065], 00:17:25.978 | 99.00th=[41157], 99.50th=[44303], 99.90th=[49021], 99.95th=[50070], 00:17:25.978 | 99.99th=[51643] 00:17:25.978 bw ( KiB/s): min=32184, max=65744, per=95.38%, avg=52420.30, stdev=10843.00, samples=10 00:17:25.978 iops : min= 8046, max=16436, avg=13105.00, stdev=2710.83, samples=10 00:17:25.978 lat (usec) : 500=0.01%, 750=2.03%, 1000=8.87% 00:17:25.978 lat (msec) : 2=8.97%, 4=1.02%, 10=13.57%, 20=56.30%, 50=9.21% 00:17:25.978 lat (msec) : 100=0.03% 00:17:25.978 cpu : usr=99.10%, sys=0.16%, ctx=28, majf=0, 
minf=5566 00:17:25.978 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:25.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:25.978 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:25.978 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:25.978 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:25.978 00:17:25.978 Run status group 0 (all jobs): 00:17:25.978 READ: bw=32.5MiB/s (34.1MB/s), 32.5MiB/s-32.5MiB/s (34.1MB/s-34.1MB/s), io=255MiB (267MB), run=7829-7829msec 00:17:25.978 WRITE: bw=53.7MiB/s (56.3MB/s), 53.7MiB/s-53.7MiB/s (56.3MB/s-56.3MB/s), io=256MiB (268MB), run=4770-4770msec 00:17:25.978 ----------------------------------------------------- 00:17:25.978 Suppressions used: 00:17:25.978 count bytes template 00:17:25.978 1 5 /usr/src/fio/parse.c 00:17:25.978 2 192 /usr/src/fio/iolog.c 00:17:25.978 1 8 libtcmalloc_minimal.so 00:17:25.978 1 904 libcrypto.so 00:17:25.978 ----------------------------------------------------- 00:17:25.978 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:25.978 Remove shared memory files 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57127 /dev/shm/spdk_tgt_trace.pid71311 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:25.978 ************************************ 00:17:25.978 END TEST ftl_fio_basic 00:17:25.978 ************************************ 00:17:25.978 00:17:25.978 real 1m0.505s 00:17:25.978 user 2m3.490s 00:17:25.978 sys 0m11.056s 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:25.978 10:17:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:25.978 10:17:30 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:25.978 10:17:30 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:25.978 10:17:30 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:25.978 10:17:30 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:25.978 ************************************ 00:17:25.978 START TEST ftl_bdevperf 00:17:25.979 ************************************ 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:25.979 * Looking for test storage... 
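[Editor's note: each of the three fio jobs in the ftl_fio_basic run that ends above was launched through the same fio_plugin helper, whose xtrace repeats before every run: ldd locates the ASAN runtime that the SPDK fio plugin links against, and both libraries are preloaded so a stock external fio binary can drive SPDK bdevs. A condensed sketch of that pattern; job_file is a placeholder, the paths are the ones shown in the trace:]

    # Sketch of the fio_plugin helper pattern traced before each fio run above.
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # e.g. /usr/lib64/libasan.so.8
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job_file"    # ASAN runtime must come first in LD_PRELOAD
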
00:17:25.979 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.979 --rc genhtml_branch_coverage=1 00:17:25.979 --rc genhtml_function_coverage=1 00:17:25.979 --rc genhtml_legend=1 00:17:25.979 --rc geninfo_all_blocks=1 00:17:25.979 --rc geninfo_unexecuted_blocks=1 00:17:25.979 00:17:25.979 ' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.979 --rc genhtml_branch_coverage=1 00:17:25.979 
--rc genhtml_function_coverage=1 00:17:25.979 --rc genhtml_legend=1 00:17:25.979 --rc geninfo_all_blocks=1 00:17:25.979 --rc geninfo_unexecuted_blocks=1 00:17:25.979 00:17:25.979 ' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.979 --rc genhtml_branch_coverage=1 00:17:25.979 --rc genhtml_function_coverage=1 00:17:25.979 --rc genhtml_legend=1 00:17:25.979 --rc geninfo_all_blocks=1 00:17:25.979 --rc geninfo_unexecuted_blocks=1 00:17:25.979 00:17:25.979 ' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:25.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:25.979 --rc genhtml_branch_coverage=1 00:17:25.979 --rc genhtml_function_coverage=1 00:17:25.979 --rc genhtml_legend=1 00:17:25.979 --rc geninfo_all_blocks=1 00:17:25.979 --rc geninfo_unexecuted_blocks=1 00:17:25.979 00:17:25.979 ' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73201 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73201 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73201 ']' 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:25.979 10:17:30 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:25.979 [2024-11-04 10:17:30.993126] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
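[Editor's note: the bdevperf bring-up traced above starts the example app held idle (-z) with target name ftl0, installs a cleanup trap, and waits for its RPC socket before any jobs run. A condensed sketch of those steps; killprocess and waitforlisten are the autotest_common.sh helpers named in the xtrace, and /var/tmp/spdk.sock is the socket the "Waiting for process..." message above refers to:]

    # Sketch of the bdevperf launch performed by bdevperf.sh above.
    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    "$BDEVPERF" -z -T ftl0 &          # -z: stay idle until told to run; -T ftl0 as invoked above
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevperf_pid"     # poll until /var/tmp/spdk.sock accepts RPCs
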
00:17:25.980 [2024-11-04 10:17:30.993394] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73201 ] 00:17:25.980 [2024-11-04 10:17:31.157286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.980 [2024-11-04 10:17:31.252522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:26.237 10:17:31 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:26.495 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:26.752 { 00:17:26.752 "name": "nvme0n1", 00:17:26.752 "aliases": [ 00:17:26.752 "647b28b3-1f44-4834-b85d-134c74a2b31f" 00:17:26.752 ], 00:17:26.752 "product_name": "NVMe disk", 00:17:26.752 "block_size": 4096, 00:17:26.752 "num_blocks": 1310720, 00:17:26.752 "uuid": "647b28b3-1f44-4834-b85d-134c74a2b31f", 00:17:26.752 "numa_id": -1, 00:17:26.752 "assigned_rate_limits": { 00:17:26.752 "rw_ios_per_sec": 0, 00:17:26.752 "rw_mbytes_per_sec": 0, 00:17:26.752 "r_mbytes_per_sec": 0, 00:17:26.752 "w_mbytes_per_sec": 0 00:17:26.752 }, 00:17:26.752 "claimed": true, 00:17:26.752 "claim_type": "read_many_write_one", 00:17:26.752 "zoned": false, 00:17:26.752 "supported_io_types": { 00:17:26.752 "read": true, 00:17:26.752 "write": true, 00:17:26.752 "unmap": true, 00:17:26.752 "flush": true, 00:17:26.752 "reset": true, 00:17:26.752 "nvme_admin": true, 00:17:26.752 "nvme_io": true, 00:17:26.752 "nvme_io_md": false, 00:17:26.752 "write_zeroes": true, 00:17:26.752 "zcopy": false, 00:17:26.752 "get_zone_info": false, 00:17:26.752 "zone_management": false, 00:17:26.752 "zone_append": false, 00:17:26.752 "compare": true, 00:17:26.752 "compare_and_write": false, 00:17:26.752 "abort": true, 00:17:26.752 "seek_hole": false, 00:17:26.752 "seek_data": false, 00:17:26.752 "copy": true, 00:17:26.752 "nvme_iov_md": false 00:17:26.752 }, 00:17:26.752 "driver_specific": { 00:17:26.752 
"nvme": [ 00:17:26.752 { 00:17:26.752 "pci_address": "0000:00:11.0", 00:17:26.752 "trid": { 00:17:26.752 "trtype": "PCIe", 00:17:26.752 "traddr": "0000:00:11.0" 00:17:26.752 }, 00:17:26.752 "ctrlr_data": { 00:17:26.752 "cntlid": 0, 00:17:26.752 "vendor_id": "0x1b36", 00:17:26.752 "model_number": "QEMU NVMe Ctrl", 00:17:26.752 "serial_number": "12341", 00:17:26.752 "firmware_revision": "8.0.0", 00:17:26.752 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:26.752 "oacs": { 00:17:26.752 "security": 0, 00:17:26.752 "format": 1, 00:17:26.752 "firmware": 0, 00:17:26.752 "ns_manage": 1 00:17:26.752 }, 00:17:26.752 "multi_ctrlr": false, 00:17:26.752 "ana_reporting": false 00:17:26.752 }, 00:17:26.752 "vs": { 00:17:26.752 "nvme_version": "1.4" 00:17:26.752 }, 00:17:26.752 "ns_data": { 00:17:26.752 "id": 1, 00:17:26.752 "can_share": false 00:17:26.752 } 00:17:26.752 } 00:17:26.752 ], 00:17:26.752 "mp_policy": "active_passive" 00:17:26.752 } 00:17:26.752 } 00:17:26.752 ]' 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:26.752 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:27.010 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=44ad26f6-aeca-455f-b275-44bb4ca0428f 00:17:27.010 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:27.010 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 44ad26f6-aeca-455f-b275-44bb4ca0428f 00:17:27.268 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:27.268 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=0e27b6a8-04ad-4709-8bfc-946438d33a50 00:17:27.268 10:17:32 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 0e27b6a8-04ad-4709-8bfc-946438d33a50 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.528 10:17:33 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:27.528 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:27.785 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:27.785 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:27.785 { 00:17:27.785 "name": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:27.785 "aliases": [ 00:17:27.785 "lvs/nvme0n1p0" 00:17:27.785 ], 00:17:27.785 "product_name": "Logical Volume", 00:17:27.785 "block_size": 4096, 00:17:27.785 "num_blocks": 26476544, 00:17:27.785 "uuid": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:27.785 "assigned_rate_limits": { 00:17:27.785 "rw_ios_per_sec": 0, 00:17:27.785 "rw_mbytes_per_sec": 0, 00:17:27.785 "r_mbytes_per_sec": 0, 00:17:27.785 "w_mbytes_per_sec": 0 00:17:27.785 }, 00:17:27.785 "claimed": false, 00:17:27.785 "zoned": false, 00:17:27.785 "supported_io_types": { 00:17:27.785 "read": true, 00:17:27.785 "write": true, 00:17:27.785 "unmap": true, 00:17:27.785 "flush": false, 00:17:27.786 "reset": true, 00:17:27.786 "nvme_admin": false, 00:17:27.786 "nvme_io": false, 00:17:27.786 "nvme_io_md": false, 00:17:27.786 "write_zeroes": true, 00:17:27.786 "zcopy": false, 00:17:27.786 "get_zone_info": false, 00:17:27.786 "zone_management": false, 00:17:27.786 "zone_append": false, 00:17:27.786 "compare": false, 00:17:27.786 "compare_and_write": false, 00:17:27.786 "abort": false, 00:17:27.786 "seek_hole": true, 00:17:27.786 "seek_data": true, 00:17:27.786 "copy": false, 00:17:27.786 "nvme_iov_md": false 00:17:27.786 }, 00:17:27.786 "driver_specific": { 00:17:27.786 "lvol": { 00:17:27.786 "lvol_store_uuid": "0e27b6a8-04ad-4709-8bfc-946438d33a50", 00:17:27.786 "base_bdev": "nvme0n1", 00:17:27.786 "thin_provision": true, 00:17:27.786 "num_allocated_clusters": 0, 00:17:27.786 "snapshot": false, 00:17:27.786 "clone": false, 00:17:27.786 "esnap_clone": false 00:17:27.786 } 00:17:27.786 } 00:17:27.786 } 00:17:27.786 ]' 00:17:27.786 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:27.786 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:27.786 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:27.786 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:27.786 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:28.043 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.299 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:28.299 { 00:17:28.299 "name": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:28.299 "aliases": [ 00:17:28.299 "lvs/nvme0n1p0" 00:17:28.299 ], 00:17:28.299 "product_name": "Logical Volume", 00:17:28.299 "block_size": 4096, 00:17:28.299 "num_blocks": 26476544, 00:17:28.299 "uuid": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:28.299 "assigned_rate_limits": { 00:17:28.299 "rw_ios_per_sec": 0, 00:17:28.299 "rw_mbytes_per_sec": 0, 00:17:28.299 "r_mbytes_per_sec": 0, 00:17:28.299 "w_mbytes_per_sec": 0 00:17:28.299 }, 00:17:28.299 "claimed": false, 00:17:28.299 "zoned": false, 00:17:28.299 "supported_io_types": { 00:17:28.299 "read": true, 00:17:28.299 "write": true, 00:17:28.299 "unmap": true, 00:17:28.299 "flush": false, 00:17:28.299 "reset": true, 00:17:28.299 "nvme_admin": false, 00:17:28.299 "nvme_io": false, 00:17:28.299 "nvme_io_md": false, 00:17:28.299 "write_zeroes": true, 00:17:28.299 "zcopy": false, 00:17:28.299 "get_zone_info": false, 00:17:28.299 "zone_management": false, 00:17:28.299 "zone_append": false, 00:17:28.299 "compare": false, 00:17:28.299 "compare_and_write": false, 00:17:28.299 "abort": false, 00:17:28.299 "seek_hole": true, 00:17:28.299 "seek_data": true, 00:17:28.299 "copy": false, 00:17:28.299 "nvme_iov_md": false 00:17:28.299 }, 00:17:28.299 "driver_specific": { 00:17:28.299 "lvol": { 00:17:28.299 "lvol_store_uuid": "0e27b6a8-04ad-4709-8bfc-946438d33a50", 00:17:28.299 "base_bdev": "nvme0n1", 00:17:28.299 "thin_provision": true, 00:17:28.299 "num_allocated_clusters": 0, 00:17:28.299 "snapshot": false, 00:17:28.299 "clone": false, 00:17:28.299 "esnap_clone": false 00:17:28.299 } 00:17:28.299 } 00:17:28.299 } 00:17:28.299 ]' 00:17:28.299 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:28.299 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:28.299 10:17:33 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:28.299 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:28.299 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:28.299 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:28.299 10:17:34 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:28.300 10:17:34 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:28.557 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b1f344df-b7a7-49e8-8049-5c45be85d663 00:17:28.814 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:28.814 { 00:17:28.814 "name": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:28.814 "aliases": [ 00:17:28.814 "lvs/nvme0n1p0" 00:17:28.814 ], 00:17:28.814 "product_name": "Logical Volume", 00:17:28.814 "block_size": 4096, 00:17:28.814 "num_blocks": 26476544, 00:17:28.814 "uuid": "b1f344df-b7a7-49e8-8049-5c45be85d663", 00:17:28.814 "assigned_rate_limits": { 00:17:28.814 "rw_ios_per_sec": 0, 00:17:28.814 "rw_mbytes_per_sec": 0, 00:17:28.814 "r_mbytes_per_sec": 0, 00:17:28.815 "w_mbytes_per_sec": 0 00:17:28.815 }, 00:17:28.815 "claimed": false, 00:17:28.815 "zoned": false, 00:17:28.815 "supported_io_types": { 00:17:28.815 "read": true, 00:17:28.815 "write": true, 00:17:28.815 "unmap": true, 00:17:28.815 "flush": false, 00:17:28.815 "reset": true, 00:17:28.815 "nvme_admin": false, 00:17:28.815 "nvme_io": false, 00:17:28.815 "nvme_io_md": false, 00:17:28.815 "write_zeroes": true, 00:17:28.815 "zcopy": false, 00:17:28.815 "get_zone_info": false, 00:17:28.815 "zone_management": false, 00:17:28.815 "zone_append": false, 00:17:28.815 "compare": false, 00:17:28.815 "compare_and_write": false, 00:17:28.815 "abort": false, 00:17:28.815 "seek_hole": true, 00:17:28.815 "seek_data": true, 00:17:28.815 "copy": false, 00:17:28.815 "nvme_iov_md": false 00:17:28.815 }, 00:17:28.815 "driver_specific": { 00:17:28.815 "lvol": { 00:17:28.815 "lvol_store_uuid": "0e27b6a8-04ad-4709-8bfc-946438d33a50", 00:17:28.815 "base_bdev": "nvme0n1", 00:17:28.815 "thin_provision": true, 00:17:28.815 "num_allocated_clusters": 0, 00:17:28.815 "snapshot": false, 00:17:28.815 "clone": false, 00:17:28.815 "esnap_clone": false 00:17:28.815 } 00:17:28.815 } 00:17:28.815 } 00:17:28.815 ]' 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:28.815 10:17:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b1f344df-b7a7-49e8-8049-5c45be85d663 -c nvc0n1p0 --l2p_dram_limit 20 00:17:29.074 [2024-11-04 10:17:34.658606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.658649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:29.074 [2024-11-04 10:17:34.658661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:29.074 [2024-11-04 10:17:34.658669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.658710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.658719] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:29.074 [2024-11-04 10:17:34.658725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:17:29.074 [2024-11-04 10:17:34.658733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.658746] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:29.074 [2024-11-04 10:17:34.659345] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:29.074 [2024-11-04 10:17:34.659362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.659372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:29.074 [2024-11-04 10:17:34.659379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:17:29.074 [2024-11-04 10:17:34.659386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.659437] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7b8eb5cd-03b1-42ae-a683-94ece20764b9 00:17:29.074 [2024-11-04 10:17:34.660394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.660491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:29.074 [2024-11-04 10:17:34.660506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:29.074 [2024-11-04 10:17:34.660514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.665188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.665214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:29.074 [2024-11-04 10:17:34.665223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.640 ms 00:17:29.074 [2024-11-04 10:17:34.665229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.665296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.665304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:29.074 [2024-11-04 10:17:34.665315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:29.074 [2024-11-04 10:17:34.665321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.665363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.665370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:29.074 [2024-11-04 10:17:34.665378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:29.074 [2024-11-04 10:17:34.665384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.665401] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:29.074 [2024-11-04 10:17:34.668243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.668277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:29.074 [2024-11-04 10:17:34.668284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.849 ms 00:17:29.074 [2024-11-04 10:17:34.668292] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.668314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.668324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:29.074 [2024-11-04 10:17:34.668330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:29.074 [2024-11-04 10:17:34.668336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.668347] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:29.074 [2024-11-04 10:17:34.668453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:29.074 [2024-11-04 10:17:34.668463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:29.074 [2024-11-04 10:17:34.668473] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:29.074 [2024-11-04 10:17:34.668481] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:29.074 [2024-11-04 10:17:34.668489] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:29.074 [2024-11-04 10:17:34.668495] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:29.074 [2024-11-04 10:17:34.668503] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:29.074 [2024-11-04 10:17:34.668509] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:29.074 [2024-11-04 10:17:34.668515] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:29.074 [2024-11-04 10:17:34.668520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.668527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:29.074 [2024-11-04 10:17:34.668533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:17:29.074 [2024-11-04 10:17:34.668543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.668603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.074 [2024-11-04 10:17:34.668610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:29.074 [2024-11-04 10:17:34.668616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:29.074 [2024-11-04 10:17:34.668624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.074 [2024-11-04 10:17:34.668691] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:29.074 [2024-11-04 10:17:34.668699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:29.074 [2024-11-04 10:17:34.668705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.074 [2024-11-04 10:17:34.668712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.074 [2024-11-04 10:17:34.668719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:29.074 [2024-11-04 10:17:34.668725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:29.074 [2024-11-04 10:17:34.668731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:29.074 
[2024-11-04 10:17:34.668737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:29.074 [2024-11-04 10:17:34.668742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:29.074 [2024-11-04 10:17:34.668748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.074 [2024-11-04 10:17:34.668753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:29.075 [2024-11-04 10:17:34.668760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:29.075 [2024-11-04 10:17:34.668765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:29.075 [2024-11-04 10:17:34.668776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:29.075 [2024-11-04 10:17:34.668796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:29.075 [2024-11-04 10:17:34.668805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:29.075 [2024-11-04 10:17:34.668817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:29.075 [2024-11-04 10:17:34.668833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668842] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:29.075 [2024-11-04 10:17:34.668853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668864] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:29.075 [2024-11-04 10:17:34.668869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:29.075 [2024-11-04 10:17:34.668887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668899] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:29.075 [2024-11-04 10:17:34.668904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668911] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.075 [2024-11-04 10:17:34.668917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:29.075 [2024-11-04 10:17:34.668923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:29.075 [2024-11-04 10:17:34.668928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:29.075 [2024-11-04 10:17:34.668935] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:29.075 [2024-11-04 10:17:34.668940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:29.075 [2024-11-04 10:17:34.668946] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:29.075 [2024-11-04 10:17:34.668958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:29.075 [2024-11-04 10:17:34.668962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668969] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:29.075 [2024-11-04 10:17:34.668975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:29.075 [2024-11-04 10:17:34.668982] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:29.075 [2024-11-04 10:17:34.668988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:29.075 [2024-11-04 10:17:34.668996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:29.075 [2024-11-04 10:17:34.669001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:29.075 [2024-11-04 10:17:34.669007] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:29.075 [2024-11-04 10:17:34.669012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:29.075 [2024-11-04 10:17:34.669018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:29.075 [2024-11-04 10:17:34.669023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:29.075 [2024-11-04 10:17:34.669032] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:29.075 [2024-11-04 10:17:34.669039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669046] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:29.075 [2024-11-04 10:17:34.669052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:29.075 [2024-11-04 10:17:34.669058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:29.075 [2024-11-04 10:17:34.669063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:29.075 [2024-11-04 10:17:34.669070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:29.075 [2024-11-04 10:17:34.669075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:29.075 [2024-11-04 10:17:34.669082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:29.075 [2024-11-04 10:17:34.669087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:29.075 [2024-11-04 10:17:34.669094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:29.075 [2024-11-04 10:17:34.669099] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:29.075 [2024-11-04 10:17:34.669131] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:29.075 [2024-11-04 10:17:34.669137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669145] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:29.075 [2024-11-04 10:17:34.669150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:29.075 [2024-11-04 10:17:34.669158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:29.075 [2024-11-04 10:17:34.669163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:29.075 [2024-11-04 10:17:34.669170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:29.075 [2024-11-04 10:17:34.669175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:29.075 [2024-11-04 10:17:34.669184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:17:29.075 [2024-11-04 10:17:34.669189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:29.075 [2024-11-04 10:17:34.669215] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
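
Stripped of the xtrace noise, the provisioning chain behind this layout is short (a sketch reusing the names and UUIDs from this run, $rpc as in the earlier sketch):

    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvstore 0e27b6a8-04ad-4709-8bfc-946438d33a50
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u 0e27b6a8-04ad-4709-8bfc-946438d33a50   # thin-provisioned lvol, 103424 MiB
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache controller -> nvc0n1
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB split -> nvc0n1p0
    $rpc -t 240 bdev_ftl_create -b ftl0 -d b1f344df-b7a7-49e8-8049-5c45be85d663 -c nvc0n1p0 --l2p_dram_limit 20

The figures in the layout dump above hang together: 20971520 L2P entries at the reported 4-byte address size are 20971520 × 4 B = 80 MiB, exactly the 80.00 MiB l2p region, and --l2p_dram_limit 20 is what later yields the 'l2p maximum resident size is: 19 (of 20) MiB' notice.
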
00:17:29.075 [2024-11-04 10:17:34.669226] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:32.392 [2024-11-04 10:17:37.668206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.668257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:32.392 [2024-11-04 10:17:37.668281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2998.977 ms 00:17:32.392 [2024-11-04 10:17:37.668292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.693751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.693810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:32.392 [2024-11-04 10:17:37.693824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.255 ms 00:17:32.392 [2024-11-04 10:17:37.693832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.693962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.693973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:32.392 [2024-11-04 10:17:37.693985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:32.392 [2024-11-04 10:17:37.693992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.738112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.738153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.392 [2024-11-04 10:17:37.738166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.084 ms 00:17:32.392 [2024-11-04 10:17:37.738174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.738206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.738215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.392 [2024-11-04 10:17:37.738227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:32.392 [2024-11-04 10:17:37.738235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.738589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.738605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.392 [2024-11-04 10:17:37.738617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.303 ms 00:17:32.392 [2024-11-04 10:17:37.738624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.738727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.738741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.392 [2024-11-04 10:17:37.738752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:17:32.392 [2024-11-04 10:17:37.738759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.751848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.751880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.392 [2024-11-04 
10:17:37.751892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.073 ms 00:17:32.392 [2024-11-04 10:17:37.751899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.763252] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:32.392 [2024-11-04 10:17:37.768412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.768445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:32.392 [2024-11-04 10:17:37.768455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.447 ms 00:17:32.392 [2024-11-04 10:17:37.768464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.850336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.850491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:32.392 [2024-11-04 10:17:37.850510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 81.852 ms 00:17:32.392 [2024-11-04 10:17:37.850520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.851005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.851036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:32.392 [2024-11-04 10:17:37.851047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:17:32.392 [2024-11-04 10:17:37.851056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.874734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.874771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:32.392 [2024-11-04 10:17:37.874794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.627 ms 00:17:32.392 [2024-11-04 10:17:37.874805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.897695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.897844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:32.392 [2024-11-04 10:17:37.897861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.858 ms 00:17:32.392 [2024-11-04 10:17:37.897870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.898671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.898706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:32.392 [2024-11-04 10:17:37.898717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:32.392 [2024-11-04 10:17:37.898727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:37.969338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.969479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:32.392 [2024-11-04 10:17:37.969497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.571 ms 00:17:32.392 [2024-11-04 10:17:37.969507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 
10:17:37.993683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:37.993719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:32.392 [2024-11-04 10:17:37.993729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.119 ms 00:17:32.392 [2024-11-04 10:17:37.993741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:38.016998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:38.017032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:32.392 [2024-11-04 10:17:38.017043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.225 ms 00:17:32.392 [2024-11-04 10:17:38.017052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.392 [2024-11-04 10:17:38.040545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.392 [2024-11-04 10:17:38.040675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:32.392 [2024-11-04 10:17:38.040691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.463 ms 00:17:32.393 [2024-11-04 10:17:38.040701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.393 [2024-11-04 10:17:38.040732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.393 [2024-11-04 10:17:38.040745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:32.393 [2024-11-04 10:17:38.040753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:32.393 [2024-11-04 10:17:38.040762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.393 [2024-11-04 10:17:38.040849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.393 [2024-11-04 10:17:38.040862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:32.393 [2024-11-04 10:17:38.040870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:32.393 [2024-11-04 10:17:38.040879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.393 [2024-11-04 10:17:38.041686] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3382.683 ms, result 0 00:17:32.393 { 00:17:32.393 "name": "ftl0", 00:17:32.393 "uuid": "7b8eb5cd-03b1-42ae-a683-94ece20764b9" 00:17:32.393 } 00:17:32.393 10:17:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:32.393 10:17:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:32.393 10:17:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:32.651 10:17:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:32.651 [2024-11-04 10:17:38.354007] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:32.651 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:32.651 Zero copy mechanism will not be used. 00:17:32.651 Running I/O for 4 seconds... 
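
Each workload below is driven into the already-running bdevperf process over RPC, one perform_tests call per pass, e.g. for the run just started:

    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632

The 69632-byte I/O size is 17 × 4096 = 68 KiB, which is why the notice above fires: it exceeds the 65536-byte (64 KiB) zero-copy threshold, so this pass copies buffers instead of using zero copy.
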
00:17:34.959 2125.00 IOPS, 141.11 MiB/s [2024-11-04T10:17:41.637Z] 2756.50 IOPS, 183.05 MiB/s [2024-11-04T10:17:42.571Z] 2847.33 IOPS, 189.08 MiB/s [2024-11-04T10:17:42.571Z] 2878.25 IOPS, 191.13 MiB/s 00:17:36.826 Latency(us) 00:17:36.826 [2024-11-04T10:17:42.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.826 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:36.826 ftl0 : 4.00 2877.22 191.07 0.00 0.00 366.64 143.36 2192.94 00:17:36.826 [2024-11-04T10:17:42.571Z] =================================================================================================================== 00:17:36.826 [2024-11-04T10:17:42.571Z] Total : 2877.22 191.07 0.00 0.00 366.64 143.36 2192.94 00:17:36.826 { 00:17:36.826 "results": [ 00:17:36.826 { 00:17:36.826 "job": "ftl0", 00:17:36.826 "core_mask": "0x1", 00:17:36.826 "workload": "randwrite", 00:17:36.826 "status": "finished", 00:17:36.826 "queue_depth": 1, 00:17:36.826 "io_size": 69632, 00:17:36.826 "runtime": 4.001779, 00:17:36.826 "iops": 2877.2203562465593, 00:17:36.826 "mibps": 191.06541428199807, 00:17:36.826 "io_failed": 0, 00:17:36.826 "io_timeout": 0, 00:17:36.826 "avg_latency_us": 366.6410101415, 00:17:36.826 "min_latency_us": 143.36, 00:17:36.826 "max_latency_us": 2192.9353846153845 00:17:36.826 } 00:17:36.826 ], 00:17:36.826 "core_count": 1 00:17:36.826 } 00:17:36.826 [2024-11-04 10:17:42.364096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:36.826 10:17:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:36.826 [2024-11-04 10:17:42.455063] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:36.826 Running I/O for 4 seconds... 
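
A quick cross-check of the table above: MiB/s is IOPS × io_size / 2^20, and the result agrees with the full-precision iops/mibps fields in the JSON block:

    awk 'BEGIN { print 2877.2203562465593 * 69632 / 1048576 }'   # 191.0654..., matching the reported 191.07 MiB/s
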
00:17:39.134 11208.00 IOPS, 43.78 MiB/s [2024-11-04T10:17:45.832Z] 10675.50 IOPS, 41.70 MiB/s [2024-11-04T10:17:46.771Z] 9734.00 IOPS, 38.02 MiB/s [2024-11-04T10:17:46.771Z] 8814.25 IOPS, 34.43 MiB/s 00:17:41.026 Latency(us) 00:17:41.026 [2024-11-04T10:17:46.771Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.026 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:41.026 ftl0 : 4.03 8775.15 34.28 0.00 0.00 14519.30 222.13 39724.90 00:17:41.026 [2024-11-04T10:17:46.771Z] =================================================================================================================== 00:17:41.026 [2024-11-04T10:17:46.771Z] Total : 8775.15 34.28 0.00 0.00 14519.30 0.00 39724.90 00:17:41.026 { 00:17:41.026 "results": [ 00:17:41.026 { 00:17:41.026 "job": "ftl0", 00:17:41.026 "core_mask": "0x1", 00:17:41.026 "workload": "randwrite", 00:17:41.026 "status": "finished", 00:17:41.026 "queue_depth": 128, 00:17:41.026 "io_size": 4096, 00:17:41.026 "runtime": 4.032408, 00:17:41.026 "iops": 8775.153704684644, 00:17:41.026 "mibps": 34.27794415892439, 00:17:41.026 "io_failed": 0, 00:17:41.026 "io_timeout": 0, 00:17:41.026 "avg_latency_us": 14519.303958174369, 00:17:41.026 "min_latency_us": 222.12923076923076, 00:17:41.026 "max_latency_us": 39724.89846153846 00:17:41.026 } 00:17:41.026 ], 00:17:41.026 "core_count": 1 00:17:41.026 } 00:17:41.026 [2024-11-04 10:17:46.496622] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:41.026 10:17:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:41.026 [2024-11-04 10:17:46.617538] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:41.026 Running I/O for 4 seconds... 
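
The pass just started is the verify workload (perform_tests -q 128 -w verify -t 4 -o 4096); it rereads and checks what was written. The LBA range it reports next spans the whole device:

    0x1400000 = 320 × 65536 = 20971520 blocks
    20971520 × 4096 B = 80 GiB, the full user capacity of ftl0, one block per L2P entry from the startup dump
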
00:17:42.926 4306.00 IOPS, 16.82 MiB/s [2024-11-04T10:17:50.054Z] 5424.00 IOPS, 21.19 MiB/s [2024-11-04T10:17:50.993Z] 6110.67 IOPS, 23.87 MiB/s [2024-11-04T10:17:50.993Z] 5911.00 IOPS, 23.09 MiB/s 00:17:45.248 Latency(us) 00:17:45.248 [2024-11-04T10:17:50.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.248 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:45.248 Verification LBA range: start 0x0 length 0x1400000 00:17:45.248 ftl0 : 4.03 5895.03 23.03 0.00 0.00 21643.47 241.03 100018.02 00:17:45.248 [2024-11-04T10:17:50.993Z] =================================================================================================================== 00:17:45.248 [2024-11-04T10:17:50.993Z] Total : 5895.03 23.03 0.00 0.00 21643.47 0.00 100018.02 00:17:45.248 { 00:17:45.248 "results": [ 00:17:45.248 { 00:17:45.248 "job": "ftl0", 00:17:45.248 "core_mask": "0x1", 00:17:45.248 "workload": "verify", 00:17:45.248 "status": "finished", 00:17:45.248 "verify_range": { 00:17:45.248 "start": 0, 00:17:45.248 "length": 20971520 00:17:45.248 }, 00:17:45.248 "queue_depth": 128, 00:17:45.248 "io_size": 4096, 00:17:45.248 "runtime": 4.032549, 00:17:45.248 "iops": 5895.030661747694, 00:17:45.248 "mibps": 23.027463522451928, 00:17:45.248 "io_failed": 0, 00:17:45.248 "io_timeout": 0, 00:17:45.248 "avg_latency_us": 21643.46743084948, 00:17:45.248 "min_latency_us": 241.03384615384616, 00:17:45.248 "max_latency_us": 100018.01846153846 00:17:45.248 } 00:17:45.248 ], 00:17:45.248 "core_count": 1 00:17:45.248 } 00:17:45.248 [2024-11-04 10:17:50.667002] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:45.248 10:17:50 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 [2024-11-04 10:17:50.878128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-04 10:17:50.878281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel [2024-11-04 10:17:50.878300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms [2024-11-04 10:17:50.878312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-04 10:17:50.878337] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread [2024-11-04 10:17:50.880968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-04 10:17:50.880998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device [2024-11-04 10:17:50.881010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.613 ms [2024-11-04 10:17:50.881017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-04 10:17:50.883671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-04 10:17:50.883704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller [2024-11-04 10:17:50.883715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.629 ms [2024-11-04 10:17:50.883723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 [2024-11-04 10:17:51.048878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action [2024-11-04 10:17:51.048920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Persist L2P 00:17:45.509 [2024-11-04 10:17:51.048937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 165.134 ms 00:17:45.509 [2024-11-04 10:17:51.048945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.055130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.055157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.509 [2024-11-04 10:17:51.055168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.146 ms 00:17:45.509 [2024-11-04 10:17:51.055175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.079253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.079287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.509 [2024-11-04 10:17:51.079301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.031 ms 00:17:45.509 [2024-11-04 10:17:51.079308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.094920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.094957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.509 [2024-11-04 10:17:51.094973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.576 ms 00:17:45.509 [2024-11-04 10:17:51.094982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.095136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.095147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.509 [2024-11-04 10:17:51.095159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:45.509 [2024-11-04 10:17:51.095167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.118681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.118713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.509 [2024-11-04 10:17:51.118726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.497 ms 00:17:45.509 [2024-11-04 10:17:51.118732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.142104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.142136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.509 [2024-11-04 10:17:51.142148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.336 ms 00:17:45.509 [2024-11-04 10:17:51.142155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.165039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 10:17:51.165070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.509 [2024-11-04 10:17:51.165082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.841 ms 00:17:45.509 [2024-11-04 10:17:51.165089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.187922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.509 [2024-11-04 
10:17:51.188053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.509 [2024-11-04 10:17:51.188075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.765 ms 00:17:45.509 [2024-11-04 10:17:51.188082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.509 [2024-11-04 10:17:51.188111] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.509 [2024-11-04 10:17:51.188125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.509 [2024-11-04 10:17:51.188193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.510 [2024-11-04 10:17:51.188318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free
00:17:45.510 [2024-11-04 10:17:51.188325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
[Bands 24-99 elided: seventy-six more ftl_dev_dump_bands records, identical apart from the band number and microsecond timestamp, each reporting 0 / 261120 wr_cnt: 0 state: free]
00:17:45.511 [2024-11-04 10:17:51.189019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:45.511 [2024-11-04 10:17:51.189034] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:45.511 [2024-11-04 10:17:51.189044] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7b8eb5cd-03b1-42ae-a683-94ece20764b9
00:17:45.511 [2024-11-04 10:17:51.189052] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:45.511 [2024-11-04 10:17:51.189060] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:45.511 [2024-11-04 10:17:51.189067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:45.511 [2024-11-04 10:17:51.189078] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:45.511 [2024-11-04 10:17:51.189085] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:45.511 [2024-11-04 10:17:51.189100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:45.511 [2024-11-04 10:17:51.189108] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:45.511 [2024-11-04 10:17:51.189117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:45.511 [2024-11-04 10:17:51.189124] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:45.511 [2024-11-04 10:17:51.189132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:45.511 [2024-11-04 10:17:51.189139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:45.511 [2024-11-04 10:17:51.189148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms
00:17:45.511 [2024-11-04 10:17:51.189155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.511 [2024-11-04 10:17:51.201750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:45.511 [2024-11-04 10:17:51.201802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:45.511 [2024-11-04 10:17:51.201814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.562 ms
00:17:45.511 [2024-11-04 10:17:51.201821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.511 [2024-11-04 10:17:51.202176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:45.511 [2024-11-04 10:17:51.202191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:45.511 [2024-11-04 10:17:51.202201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.323 ms
00:17:45.511 [2024-11-04 10:17:51.202208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.511 [2024-11-04 10:17:51.237793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.511 [2024-11-04 10:17:51.237828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:45.511 [2024-11-04 10:17:51.237841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.511 [2024-11-04 10:17:51.237849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
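The "WAF: inf" in the statistics dump above is the expected degenerate value for this run rather than an error: write amplification is conventionally total media writes divided by user writes, and the dump records 960 total writes against 0 user writes, so the ratio diverges. A hypothetical one-liner (not part of the test suite) that mirrors that arithmetic:

  # WAF = total writes / user writes; 960 / 0 diverges, hence "inf" in the dump.
  awk -v total=960 -v user=0 'BEGIN { if (user == 0) print "WAF: inf"; else printf "WAF: %.3f\n", total / user }'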
00:17:45.511 [2024-11-04 10:17:51.237905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.511 [2024-11-04 10:17:51.237913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:17:45.511 [2024-11-04 10:17:51.237923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.511 [2024-11-04 10:17:51.237930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.511 [2024-11-04 10:17:51.237994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.511 [2024-11-04 10:17:51.238006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:17:45.511 [2024-11-04 10:17:51.238015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.511 [2024-11-04 10:17:51.238022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.511 [2024-11-04 10:17:51.238038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.511 [2024-11-04 10:17:51.238046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:17:45.511 [2024-11-04 10:17:51.238055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.511 [2024-11-04 10:17:51.238062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.772 [2024-11-04 10:17:51.317559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.772 [2024-11-04 10:17:51.317608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:17:45.772 [2024-11-04 10:17:51.317623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.772 [2024-11-04 10:17:51.317631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.772 [2024-11-04 10:17:51.384353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.772 [2024-11-04 10:17:51.384408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:17:45.772 [2024-11-04 10:17:51.384423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.772 [2024-11-04 10:17:51.384432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.772 [2024-11-04 10:17:51.384533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.772 [2024-11-04 10:17:51.384545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:17:45.772 [2024-11-04 10:17:51.384558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.772 [2024-11-04 10:17:51.384567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.773 [2024-11-04 10:17:51.384613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.773 [2024-11-04 10:17:51.384623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:45.773 [2024-11-04 10:17:51.384633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:45.773 [2024-11-04 10:17:51.384641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:45.773 [2024-11-04 10:17:51.384740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:45.773 [2024-11-04 10:17:51.384751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:45.773 [2024-11-04 10:17:51.384765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000
ms 00:17:45.773 [2024-11-04 10:17:51.384775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.773 [2024-11-04 10:17:51.384846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.773 [2024-11-04 10:17:51.384856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:45.773 [2024-11-04 10:17:51.384866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.773 [2024-11-04 10:17:51.384874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.773 [2024-11-04 10:17:51.384914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.773 [2024-11-04 10:17:51.384923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:45.773 [2024-11-04 10:17:51.384934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.773 [2024-11-04 10:17:51.384942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.773 [2024-11-04 10:17:51.384992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:45.773 [2024-11-04 10:17:51.385009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:45.773 [2024-11-04 10:17:51.385020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:45.773 [2024-11-04 10:17:51.385027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.773 [2024-11-04 10:17:51.385162] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 506.986 ms, result 0 00:17:45.773 true 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73201 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73201 ']' 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73201 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73201 00:17:45.773 killing process with pid 73201 00:17:45.773 Received shutdown signal, test time was about 4.000000 seconds 00:17:45.773 00:17:45.773 Latency(us) 00:17:45.773 [2024-11-04T10:17:51.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:45.773 [2024-11-04T10:17:51.518Z] =================================================================================================================== 00:17:45.773 [2024-11-04T10:17:51.518Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73201' 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73201 00:17:45.773 10:17:51 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73201 00:17:49.972 Remove shared memory files 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:49.972 10:17:54 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:49.972 ************************************ 00:17:49.972 END TEST ftl_bdevperf 00:17:49.972 ************************************ 00:17:49.972 00:17:49.972 real 0m24.086s 00:17:49.972 user 0m26.737s 00:17:49.972 sys 0m0.854s 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:49.972 10:17:54 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 10:17:54 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:49.972 10:17:54 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:49.972 10:17:54 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:49.972 10:17:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:49.972 ************************************ 00:17:49.972 START TEST ftl_trim 00:17:49.972 ************************************ 00:17:49.972 10:17:54 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:49.972 * Looking for test storage... 00:17:49.972 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.972 10:17:54 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:49.972 10:17:54 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:17:49.972 10:17:54 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:49.972 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:49.972 10:17:55 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:49.972 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:49.972 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:49.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.972 --rc genhtml_branch_coverage=1 00:17:49.972 --rc genhtml_function_coverage=1 00:17:49.972 --rc genhtml_legend=1 00:17:49.972 --rc geninfo_all_blocks=1 00:17:49.972 --rc geninfo_unexecuted_blocks=1 00:17:49.972 00:17:49.972 ' 00:17:49.972 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:49.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.972 --rc genhtml_branch_coverage=1 00:17:49.972 --rc genhtml_function_coverage=1 00:17:49.972 --rc genhtml_legend=1 00:17:49.972 --rc geninfo_all_blocks=1 00:17:49.972 --rc geninfo_unexecuted_blocks=1 00:17:49.972 00:17:49.972 ' 00:17:49.972 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:49.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.973 --rc genhtml_branch_coverage=1 00:17:49.973 --rc genhtml_function_coverage=1 00:17:49.973 --rc genhtml_legend=1 00:17:49.973 --rc geninfo_all_blocks=1 00:17:49.973 --rc geninfo_unexecuted_blocks=1 00:17:49.973 00:17:49.973 ' 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:49.973 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:49.973 --rc genhtml_branch_coverage=1 00:17:49.973 --rc genhtml_function_coverage=1 00:17:49.973 --rc genhtml_legend=1 00:17:49.973 --rc geninfo_all_blocks=1 00:17:49.973 --rc geninfo_unexecuted_blocks=1 00:17:49.973 00:17:49.973 ' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:49.973 10:17:55 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73550 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:49.973 10:17:55 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73550 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73550 ']' 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.973 10:17:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:49.973 [2024-11-04 10:17:55.148276] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:17:49.973 [2024-11-04 10:17:55.148569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73550 ] 00:17:49.973 [2024-11-04 10:17:55.312515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:49.973 [2024-11-04 10:17:55.426343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:49.973 [2024-11-04 10:17:55.426587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.973 [2024-11-04 10:17:55.426654] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.538 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.539 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:50.539 10:17:56 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:50.796 10:17:56 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:50.796 10:17:56 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:50.796 10:17:56 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:50.796 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:50.796 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:50.796 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:50.796 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:50.796 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:51.054 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:51.054 { 00:17:51.054 "name": "nvme0n1", 00:17:51.054 "aliases": [ 
00:17:51.054 "0cad044f-2e63-4ab9-89cb-5befc5614ff9" 00:17:51.054 ], 00:17:51.054 "product_name": "NVMe disk", 00:17:51.054 "block_size": 4096, 00:17:51.054 "num_blocks": 1310720, 00:17:51.054 "uuid": "0cad044f-2e63-4ab9-89cb-5befc5614ff9", 00:17:51.054 "numa_id": -1, 00:17:51.054 "assigned_rate_limits": { 00:17:51.054 "rw_ios_per_sec": 0, 00:17:51.054 "rw_mbytes_per_sec": 0, 00:17:51.054 "r_mbytes_per_sec": 0, 00:17:51.054 "w_mbytes_per_sec": 0 00:17:51.054 }, 00:17:51.054 "claimed": true, 00:17:51.054 "claim_type": "read_many_write_one", 00:17:51.054 "zoned": false, 00:17:51.054 "supported_io_types": { 00:17:51.054 "read": true, 00:17:51.054 "write": true, 00:17:51.054 "unmap": true, 00:17:51.054 "flush": true, 00:17:51.054 "reset": true, 00:17:51.054 "nvme_admin": true, 00:17:51.054 "nvme_io": true, 00:17:51.054 "nvme_io_md": false, 00:17:51.054 "write_zeroes": true, 00:17:51.054 "zcopy": false, 00:17:51.054 "get_zone_info": false, 00:17:51.054 "zone_management": false, 00:17:51.054 "zone_append": false, 00:17:51.054 "compare": true, 00:17:51.054 "compare_and_write": false, 00:17:51.054 "abort": true, 00:17:51.054 "seek_hole": false, 00:17:51.054 "seek_data": false, 00:17:51.054 "copy": true, 00:17:51.054 "nvme_iov_md": false 00:17:51.054 }, 00:17:51.054 "driver_specific": { 00:17:51.054 "nvme": [ 00:17:51.054 { 00:17:51.054 "pci_address": "0000:00:11.0", 00:17:51.054 "trid": { 00:17:51.054 "trtype": "PCIe", 00:17:51.054 "traddr": "0000:00:11.0" 00:17:51.054 }, 00:17:51.054 "ctrlr_data": { 00:17:51.054 "cntlid": 0, 00:17:51.054 "vendor_id": "0x1b36", 00:17:51.054 "model_number": "QEMU NVMe Ctrl", 00:17:51.054 "serial_number": "12341", 00:17:51.054 "firmware_revision": "8.0.0", 00:17:51.054 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:51.054 "oacs": { 00:17:51.054 "security": 0, 00:17:51.054 "format": 1, 00:17:51.054 "firmware": 0, 00:17:51.054 "ns_manage": 1 00:17:51.054 }, 00:17:51.054 "multi_ctrlr": false, 00:17:51.054 "ana_reporting": false 00:17:51.054 }, 00:17:51.054 "vs": { 00:17:51.054 "nvme_version": "1.4" 00:17:51.054 }, 00:17:51.054 "ns_data": { 00:17:51.054 "id": 1, 00:17:51.054 "can_share": false 00:17:51.055 } 00:17:51.055 } 00:17:51.055 ], 00:17:51.055 "mp_policy": "active_passive" 00:17:51.055 } 00:17:51.055 } 00:17:51.055 ]' 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:51.055 10:17:56 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:17:51.055 10:17:56 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:51.055 10:17:56 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:51.055 10:17:56 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:51.055 10:17:56 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:51.055 10:17:56 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:51.312 10:17:56 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=0e27b6a8-04ad-4709-8bfc-946438d33a50 00:17:51.312 10:17:56 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:51.312 10:17:56 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 0e27b6a8-04ad-4709-8bfc-946438d33a50 00:17:51.312 10:17:57 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:51.570 10:17:57 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=a8e279a0-d73d-4fc0-bba6-a4e4834bdca7 00:17:51.570 10:17:57 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a8e279a0-d73d-4fc0-bba6-a4e4834bdca7 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:51.828 10:17:57 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:51.828 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:51.828 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:51.828 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:51.828 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:51.828 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:52.089 10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:52.089 { 00:17:52.089 "name": "f8e38a12-96fa-4966-9f35-77fd2d87a97b", 00:17:52.089 "aliases": [ 00:17:52.089 "lvs/nvme0n1p0" 00:17:52.089 ], 00:17:52.089 "product_name": "Logical Volume", 00:17:52.089 "block_size": 4096, 00:17:52.089 "num_blocks": 26476544, 00:17:52.089 "uuid": "f8e38a12-96fa-4966-9f35-77fd2d87a97b", 00:17:52.089 "assigned_rate_limits": { 00:17:52.089 "rw_ios_per_sec": 0, 00:17:52.089 "rw_mbytes_per_sec": 0, 00:17:52.089 "r_mbytes_per_sec": 0, 00:17:52.089 "w_mbytes_per_sec": 0 00:17:52.089 }, 00:17:52.089 "claimed": false, 00:17:52.089 "zoned": false, 00:17:52.089 "supported_io_types": { 00:17:52.089 "read": true, 00:17:52.089 "write": true, 00:17:52.089 "unmap": true, 00:17:52.089 "flush": false, 00:17:52.089 "reset": true, 00:17:52.089 "nvme_admin": false, 00:17:52.089 "nvme_io": false, 00:17:52.089 "nvme_io_md": false, 00:17:52.089 "write_zeroes": true, 00:17:52.089 "zcopy": false, 00:17:52.089 "get_zone_info": false, 00:17:52.089 "zone_management": false, 00:17:52.089 "zone_append": false, 00:17:52.089 "compare": false, 00:17:52.089 "compare_and_write": false, 00:17:52.089 "abort": false, 00:17:52.089 "seek_hole": true, 00:17:52.089 "seek_data": true, 00:17:52.089 "copy": false, 00:17:52.089 "nvme_iov_md": false 00:17:52.089 }, 00:17:52.089 "driver_specific": { 00:17:52.089 "lvol": { 00:17:52.089 "lvol_store_uuid": "a8e279a0-d73d-4fc0-bba6-a4e4834bdca7", 00:17:52.089 "base_bdev": "nvme0n1", 00:17:52.089 "thin_provision": true, 00:17:52.089 "num_allocated_clusters": 0, 00:17:52.089 "snapshot": false, 00:17:52.089 "clone": false, 00:17:52.089 "esnap_clone": false 00:17:52.089 } 00:17:52.089 } 00:17:52.089 } 00:17:52.089 ]' 00:17:52.089 10:17:57 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:52.089
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:52.089
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:52.089
10:17:57 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171
10:17:57 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev
10:17:57 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:52.364
10:17:57 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1
10:17:57 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]]
10:17:57 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size f8e38a12-96fa-4966-9f35-77fd2d87a97b
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f8e38a12-96fa-4966-9f35-77fd2d87a97b
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb
10:17:57 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:52.637
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ ... ]' [bdev JSON elided: identical to the f8e38a12-96fa-4966-9f35-77fd2d87a97b listing printed above, apart from log timestamps] 00:17:52.638
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:52.638
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:52.638
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:52.638
10:17:58 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171
10:17:58 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:52.896
10:17:58 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0
10:17:58 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60
10:17:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size f8e38a12-96fa-4966-9f35-77fd2d87a97b
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=f8e38a12-96fa-4966-9f35-77fd2d87a97b
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b f8e38a12-96fa-4966-9f35-77fd2d87a97b 00:17:52.896
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ ... ]' [bdev JSON elided: identical to the same listing, apart from log timestamps] 00:17:53.155
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:53.155
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:53.155
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424
10:17:58 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:53.155
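Every get_bdev_size call traced in this test reduces to the same computation: fetch the bdev record with the bdev_get_bdevs RPC, pull block_size and num_blocks out with jq, and convert the product to MiB. A minimal standalone sketch of that flow, reusing the rpc.py path and bdev name from this run (it assumes the spdk_tgt started above is still listening; the MiB conversion step is inferred from the values echoed in the trace):

  # Sketch of the get_bdev_size arithmetic: size in MiB = block_size * num_blocks / 1024^2.
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  info=$("$rpc" bdev_get_bdevs -b f8e38a12-96fa-4966-9f35-77fd2d87a97b)
  bs=$(jq '.[] .block_size' <<< "$info")   # 4096 in this run
  nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 in this run
  echo $(( bs * nb / 1024 / 1024 ))        # prints 103424, the bdev_size computed above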
10:17:58 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60
10:17:58 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d f8e38a12-96fa-4966-9f35-77fd2d87a97b -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
00:17:53.155 [2024-11-04 10:17:58.865447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.865612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:17:53.155 [2024-11-04 10:17:58.865677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:17:53.155 [2024-11-04 10:17:58.865701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.155 [2024-11-04 10:17:58.868509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.868621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:53.155 [2024-11-04 10:17:58.868678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.766 ms
00:17:53.155 [2024-11-04 10:17:58.868701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.155 [2024-11-04 10:17:58.869223] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:17:53.155 [2024-11-04 10:17:58.869993] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:17:53.155 [2024-11-04 10:17:58.870095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.870172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:53.155 [2024-11-04 10:17:58.870189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.888 ms
00:17:53.155 [2024-11-04 10:17:58.870197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.155 [2024-11-04 10:17:58.870466] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID c6ad6e55-f413-4941-b6dd-4460bc0cd26d
00:17:53.155 [2024-11-04 10:17:58.871531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.871558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock
00:17:53.155 [2024-11-04 10:17:58.871569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms
00:17:53.155 [2024-11-04 10:17:58.871578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.155 [2024-11-04 10:17:58.876966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.877064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:17:53.155 [2024-11-04 10:17:58.877114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.333 ms
00:17:53.155 [2024-11-04 10:17:58.877138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:53.155 [2024-11-04 10:17:58.877267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:53.155 [2024-11-04 10:17:58.877300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:17:53.155 [2024-11-04 10:17:58.877321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 0.070 ms 00:17:53.155 [2024-11-04 10:17:58.877345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.877446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.155 [2024-11-04 10:17:58.877473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:53.155 [2024-11-04 10:17:58.877494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:53.155 [2024-11-04 10:17:58.877514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.877553] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:53.155 [2024-11-04 10:17:58.881224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.155 [2024-11-04 10:17:58.881324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.155 [2024-11-04 10:17:58.881376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.674 ms 00:17:53.155 [2024-11-04 10:17:58.881400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.881493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.155 [2024-11-04 10:17:58.881520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:53.155 [2024-11-04 10:17:58.881544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:53.155 [2024-11-04 10:17:58.881613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.881665] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:53.155 [2024-11-04 10:17:58.882189] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:53.155 [2024-11-04 10:17:58.882292] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:53.155 [2024-11-04 10:17:58.882354] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:53.155 [2024-11-04 10:17:58.882390] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:53.155 [2024-11-04 10:17:58.882444] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:53.155 [2024-11-04 10:17:58.882476] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:53.155 [2024-11-04 10:17:58.882495] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:53.155 [2024-11-04 10:17:58.882593] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:53.155 [2024-11-04 10:17:58.882623] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:53.155 [2024-11-04 10:17:58.882645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.155 [2024-11-04 10:17:58.882668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:53.155 [2024-11-04 10:17:58.882690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:17:53.155 [2024-11-04 10:17:58.882709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.882845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.155 
[2024-11-04 10:17:58.882868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:53.155 [2024-11-04 10:17:58.882940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:53.155 [2024-11-04 10:17:58.882962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.155 [2024-11-04 10:17:58.883096] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:53.155 [2024-11-04 10:17:58.883119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:53.155 [2024-11-04 10:17:58.883142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.155 [2024-11-04 10:17:58.883161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:53.155 [2024-11-04 10:17:58.883200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:53.155 [2024-11-04 10:17:58.883274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:53.155 [2024-11-04 10:17:58.883294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883312] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.155 [2024-11-04 10:17:58.883357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:53.155 [2024-11-04 10:17:58.883380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:53.155 [2024-11-04 10:17:58.883400] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.155 [2024-11-04 10:17:58.883418] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:53.155 [2024-11-04 10:17:58.883438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:53.155 [2024-11-04 10:17:58.883482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:53.155 [2024-11-04 10:17:58.883526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:53.155 [2024-11-04 10:17:58.883545] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883564] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:53.155 [2024-11-04 10:17:58.883591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:53.155 [2024-11-04 10:17:58.883610] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.155 [2024-11-04 10:17:58.883629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:53.155 [2024-11-04 10:17:58.883647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:53.156 [2024-11-04 10:17:58.883667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.156 [2024-11-04 10:17:58.883685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:53.156 [2024-11-04 10:17:58.883739] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:53.156 [2024-11-04 10:17:58.883760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.156 [2024-11-04 10:17:58.883790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:53.156 [2024-11-04 10:17:58.883810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:53.156 [2024-11-04 10:17:58.883830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.156 [2024-11-04 10:17:58.884294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:53.156 [2024-11-04 10:17:58.884343] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:53.156 [2024-11-04 10:17:58.884365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.156 [2024-11-04 10:17:58.884386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:53.156 [2024-11-04 10:17:58.884404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:53.156 [2024-11-04 10:17:58.884424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.156 [2024-11-04 10:17:58.884484] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:53.156 [2024-11-04 10:17:58.884510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:53.156 [2024-11-04 10:17:58.884528] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.156 [2024-11-04 10:17:58.884549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:53.156 [2024-11-04 10:17:58.884591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:53.156 [2024-11-04 10:17:58.884614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.156 [2024-11-04 10:17:58.884633] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:53.156 [2024-11-04 10:17:58.884654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:53.156 [2024-11-04 10:17:58.884673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.156 [2024-11-04 10:17:58.884713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.156 [2024-11-04 10:17:58.884735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:53.156 [2024-11-04 10:17:58.884759] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:53.156 [2024-11-04 10:17:58.884778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:53.156 [2024-11-04 10:17:58.884848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:53.156 [2024-11-04 10:17:58.884870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:53.156 [2024-11-04 10:17:58.884891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:53.156 [2024-11-04 10:17:58.884913] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:53.156 [2024-11-04 10:17:58.884973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:53.156 [2024-11-04 10:17:58.885096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:53.156 [2024-11-04 10:17:58.885136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:53.156 [2024-11-04 10:17:58.885147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:53.156 [2024-11-04 10:17:58.885154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:53.156 [2024-11-04 10:17:58.885163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:53.156 [2024-11-04 10:17:58.885170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:53.156 [2024-11-04 10:17:58.885179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:53.156 [2024-11-04 10:17:58.885185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:53.156 [2024-11-04 10:17:58.885196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885203] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:53.156 [2024-11-04 10:17:58.885234] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:53.156 [2024-11-04 10:17:58.885244] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:53.156 [2024-11-04 10:17:58.885262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:53.156 [2024-11-04 10:17:58.885269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:53.156 [2024-11-04 10:17:58.885277] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:53.156 [2024-11-04 10:17:58.885287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.156 [2024-11-04 10:17:58.885301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:53.156 [2024-11-04 10:17:58.885309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:17:53.156 [2024-11-04 10:17:58.885318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.156 [2024-11-04 10:17:58.885398] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:53.156 [2024-11-04 10:17:58.885412] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:56.436 [2024-11-04 10:18:01.599237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.599425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:56.436 [2024-11-04 10:18:01.599492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2713.829 ms 00:17:56.436 [2024-11-04 10:18:01.599518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.624435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.624569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.436 [2024-11-04 10:18:01.624620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.629 ms 00:17:56.436 [2024-11-04 10:18:01.624647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.624810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.624885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:56.436 [2024-11-04 10:18:01.624909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:17:56.436 [2024-11-04 10:18:01.624932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.666565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.666711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:56.436 [2024-11-04 10:18:01.666770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.553 ms 00:17:56.436 [2024-11-04 10:18:01.666820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.666906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.666935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:56.436 [2024-11-04 10:18:01.666955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:56.436 [2024-11-04 10:18:01.667022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.667341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.667424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:56.436 [2024-11-04 10:18:01.667472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:17:56.436 [2024-11-04 10:18:01.667499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.667663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.667723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:56.436 [2024-11-04 10:18:01.667772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:17:56.436 [2024-11-04 10:18:01.667810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.685265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.685361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:56.436 [2024-11-04 10:18:01.685409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.404 ms 00:17:56.436 [2024-11-04 10:18:01.685434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.696731] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:56.436 [2024-11-04 10:18:01.710506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.710605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:56.436 [2024-11-04 10:18:01.710656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.964 ms 00:17:56.436 [2024-11-04 10:18:01.710681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.784647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.784796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:56.436 [2024-11-04 10:18:01.784859] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.890 ms 00:17:56.436 [2024-11-04 10:18:01.784886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.436 [2024-11-04 10:18:01.785100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.436 [2024-11-04 10:18:01.785129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:56.437 [2024-11-04 10:18:01.785200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:17:56.437 [2024-11-04 10:18:01.785222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.807869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.807975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:56.437 [2024-11-04 10:18:01.807995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.603 ms 00:17:56.437 [2024-11-04 10:18:01.808003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.830191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.830222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:56.437 [2024-11-04 10:18:01.830234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.135 ms 00:17:56.437 [2024-11-04 10:18:01.830241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.830839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.830856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:56.437 [2024-11-04 10:18:01.830866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:17:56.437 [2024-11-04 10:18:01.830874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.903162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.903276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:56.437 [2024-11-04 10:18:01.903297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.256 ms 00:17:56.437 [2024-11-04 10:18:01.903307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
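The startup sequence above is a series of trace_step records from mngt/ftl_mngt.c, one Action/name/duration/status quadruple per management step, and the per-step durations roughly add up to the 3109.420 ms 'FTL startup' total reported just below. A minimal sketch for ranking those steps by duration, assuming the console output is saved one entry per line to build.log (a hypothetical file name, not part of the test):

    # Sketch only: pair each trace_step "name:" line with the "duration:" line
    # that follows it, then list the slowest FTL management steps first.
    awk -F'name: |duration: | ms' '
      /trace_step/ && /name: /     { step = $2 }                  # remember the step name
      /trace_step/ && /duration: / { printf "%10.3f ms  %s\n", $2, step }
    ' build.log | sort -rn | head -5

In this run that would put 'Scrub NV cache' (2713.829 ms) far ahead of every other step, matching the earlier notice that the NV cache data region needed scrubbing.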
00:17:56.437 [2024-11-04 10:18:01.927212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.927246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:56.437 [2024-11-04 10:18:01.927258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.819 ms 00:17:56.437 [2024-11-04 10:18:01.927267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.950460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.950571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:56.437 [2024-11-04 10:18:01.950589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.137 ms 00:17:56.437 [2024-11-04 10:18:01.950597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.973976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.974009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:56.437 [2024-11-04 10:18:01.974022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.313 ms 00:17:56.437 [2024-11-04 10:18:01.974041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.974102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.974112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:56.437 [2024-11-04 10:18:01.974125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:56.437 [2024-11-04 10:18:01.974134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.974209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:56.437 [2024-11-04 10:18:01.974217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:56.437 [2024-11-04 10:18:01.974226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:56.437 [2024-11-04 10:18:01.974233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.437 [2024-11-04 10:18:01.975143] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:56.437 [2024-11-04 10:18:01.978091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3109.420 ms, result 0 00:17:56.437 [2024-11-04 10:18:01.978840] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:56.437 { 00:17:56.437 "name": "ftl0", 00:17:56.437 "uuid": "c6ad6e55-f413-4941-b6dd-4460bc0cd26d" 00:17:56.437 } 00:17:56.437 10:18:01 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:56.437 10:18:01 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine
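waitforbdev, traced above from autotest_common.sh, first lets bdev examination finish and then asks for the ftl0 descriptor; the bdev_get_bdevs call below is given -t 2000, so it blocks until the bdev appears or the 2000 ms timeout expires, and on success it prints the JSON descriptor array that follows. A minimal sketch of the same readiness check done by hand (not part of the test), deriving the usable capacity from the descriptor fields shown below:

    # Sketch only: poll for ftl0 the same way waitforbdev does, then compute
    # its size from the descriptor. Paths and flags mirror the trace above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    info=$("$rpc" bdev_get_bdevs -b ftl0 -t 2000) || exit 1    # waits up to 2000 ms for ftl0
    nb=$(jq '.[0].num_blocks' <<<"$info")                      # 23592960 in this run
    bs=$(jq '.[0].block_size' <<<"$info")                      # 4096
    echo "ftl0: $(( nb * bs / 1024 / 1024 )) MiB usable"       # 23592960 * 4096 B = 92160 MiB

trim.sh does the equivalent a few lines further down, piping the saved bdev_info through jq '.[] .num_blocks' to arrive at nb=23592960.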
00:17:56.696 10:18:02 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 [ 00:17:56.696 { 00:17:56.696 "name": "ftl0", 00:17:56.696 "aliases": [ 00:17:56.696 "c6ad6e55-f413-4941-b6dd-4460bc0cd26d" 00:17:56.696 ], 00:17:56.696 "product_name": "FTL disk", 00:17:56.696 "block_size": 4096, 00:17:56.696 "num_blocks": 23592960, 00:17:56.696 "uuid": "c6ad6e55-f413-4941-b6dd-4460bc0cd26d", 00:17:56.696 "assigned_rate_limits": { 00:17:56.696 "rw_ios_per_sec": 0, 00:17:56.696 "rw_mbytes_per_sec": 0, 00:17:56.696 "r_mbytes_per_sec": 0, 00:17:56.696 "w_mbytes_per_sec": 0 00:17:56.696 }, 00:17:56.696 "claimed": false, 00:17:56.696 "zoned": false, 00:17:56.696 "supported_io_types": { 00:17:56.696 "read": true, 00:17:56.696 "write": true, 00:17:56.696 "unmap": true, 00:17:56.696 "flush": true, 00:17:56.696 "reset": false, 00:17:56.696 "nvme_admin": false, 00:17:56.696 "nvme_io": false, 00:17:56.696 "nvme_io_md": false, 00:17:56.696 "write_zeroes": true, 00:17:56.696 "zcopy": false, 00:17:56.696 "get_zone_info": false, 00:17:56.696 "zone_management": false, 00:17:56.696 "zone_append": false, 00:17:56.696 "compare": false, 00:17:56.696 "compare_and_write": false, 00:17:56.696 "abort": false, 00:17:56.696 "seek_hole": false, 00:17:56.696 "seek_data": false, 00:17:56.696 "copy": false, 00:17:56.696 "nvme_iov_md": false 00:17:56.696 }, 00:17:56.696 "driver_specific": { 00:17:56.696 "ftl": { 00:17:56.696 "base_bdev": "f8e38a12-96fa-4966-9f35-77fd2d87a97b", 00:17:56.696 "cache": "nvc0n1p0" 00:17:56.696 } 00:17:56.696 } 00:17:56.696 } 00:17:56.696 ] 00:17:56.696 10:18:02 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:56.696 10:18:02 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:56.696 10:18:02 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:56.953 10:18:02 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:56.953 10:18:02 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:57.211 10:18:02 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:57.211 { 00:17:57.211 "name": "ftl0", 00:17:57.211 "aliases": [ 00:17:57.211 "c6ad6e55-f413-4941-b6dd-4460bc0cd26d" 00:17:57.211 ], 00:17:57.211 "product_name": "FTL disk", 00:17:57.211 "block_size": 4096, 00:17:57.211 "num_blocks": 23592960, 00:17:57.211 "uuid": "c6ad6e55-f413-4941-b6dd-4460bc0cd26d", 00:17:57.211 "assigned_rate_limits": { 00:17:57.211 "rw_ios_per_sec": 0, 00:17:57.211 "rw_mbytes_per_sec": 0, 00:17:57.211 "r_mbytes_per_sec": 0, 00:17:57.211 "w_mbytes_per_sec": 0 00:17:57.211 }, 00:17:57.211 "claimed": false, 00:17:57.211 "zoned": false, 00:17:57.211 "supported_io_types": { 00:17:57.211 "read": true, 00:17:57.211 "write": true, 00:17:57.211 "unmap": true, 00:17:57.211 "flush": true, 00:17:57.211 "reset": false, 00:17:57.211 "nvme_admin": false, 00:17:57.211 "nvme_io": false, 00:17:57.211 "nvme_io_md": false, 00:17:57.211 "write_zeroes": true, 00:17:57.211 "zcopy": false, 00:17:57.211 "get_zone_info": false, 00:17:57.211 "zone_management": false, 00:17:57.211 "zone_append": false, 00:17:57.211 "compare": false, 00:17:57.211 "compare_and_write": false, 00:17:57.211 "abort": false, 00:17:57.211 "seek_hole": false, 00:17:57.211 "seek_data": false, 00:17:57.211 "copy": false, 00:17:57.211 "nvme_iov_md": false 00:17:57.211 }, 00:17:57.211 "driver_specific": { 00:17:57.211 "ftl": { 00:17:57.211 "base_bdev": "f8e38a12-96fa-4966-9f35-77fd2d87a97b",
00:17:57.211 "cache": "nvc0n1p0" 00:17:57.211 } 00:17:57.211 } 00:17:57.211 } 00:17:57.211 ]' 00:17:57.211 10:18:02 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:57.211 10:18:02 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:57.211 10:18:02 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:57.475 [2024-11-04 10:18:03.010120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.010166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:57.476 [2024-11-04 10:18:03.010179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.476 [2024-11-04 10:18:03.010189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.010220] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:57.476 [2024-11-04 10:18:03.012830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.012962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:57.476 [2024-11-04 10:18:03.012985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.592 ms 00:17:57.476 [2024-11-04 10:18:03.012993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.013468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.013483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:57.476 [2024-11-04 10:18:03.013494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.435 ms 00:17:57.476 [2024-11-04 10:18:03.013502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.017151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.017173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:57.476 [2024-11-04 10:18:03.017185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.618 ms 00:17:57.476 [2024-11-04 10:18:03.017196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.024302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.024399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:57.476 [2024-11-04 10:18:03.024416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.064 ms 00:17:57.476 [2024-11-04 10:18:03.024423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.047328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.047434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:57.476 [2024-11-04 10:18:03.047455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.817 ms 00:17:57.476 [2024-11-04 10:18:03.047462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.062280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.062383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:57.476 [2024-11-04 10:18:03.062402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.764 ms 00:17:57.476 [2024-11-04 10:18:03.062410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.062598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.062610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:57.476 [2024-11-04 10:18:03.062620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:17:57.476 [2024-11-04 10:18:03.062627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.085658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.085757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:57.476 [2024-11-04 10:18:03.085774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.006 ms 00:17:57.476 [2024-11-04 10:18:03.085801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.108219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.108334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:57.476 [2024-11-04 10:18:03.108354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.369 ms 00:17:57.476 [2024-11-04 10:18:03.108361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.130051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.130080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:57.476 [2024-11-04 10:18:03.130092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.626 ms 00:17:57.476 [2024-11-04 10:18:03.130099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.152315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.152354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:57.476 [2024-11-04 10:18:03.152366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.121 ms 00:17:57.476 [2024-11-04 10:18:03.152373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.152428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:57.476 [2024-11-04 10:18:03.152443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152506] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 
[2024-11-04 10:18:03.152726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:57.476 [2024-11-04 10:18:03.152957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.152997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:57.476 [2024-11-04 10:18:03.153318] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:57.476 [2024-11-04 10:18:03.153329] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:17:57.476 [2024-11-04 10:18:03.153337] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:57.476 [2024-11-04 10:18:03.153346] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:57.476 [2024-11-04 10:18:03.153353] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:57.476 [2024-11-04 10:18:03.153361] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:57.476 [2024-11-04 10:18:03.153368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:57.476 [2024-11-04 10:18:03.153377] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:57.476 [2024-11-04 10:18:03.153386] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:57.476 [2024-11-04 10:18:03.153393] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:57.476 [2024-11-04 10:18:03.153400] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:57.476 [2024-11-04 10:18:03.153408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.153416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:57.476 [2024-11-04 10:18:03.153425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.982 ms 00:17:57.476 [2024-11-04 10:18:03.153432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.165726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.165754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:57.476 [2024-11-04 10:18:03.165768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.262 ms 00:17:57.476 [2024-11-04 10:18:03.165777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.166161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.476 [2024-11-04 10:18:03.166179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:57.476 [2024-11-04 10:18:03.166189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:57.476 [2024-11-04 10:18:03.166196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.209445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.476 [2024-11-04 10:18:03.209479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.476 [2024-11-04 10:18:03.209492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.476 [2024-11-04 10:18:03.209502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.476 [2024-11-04 10:18:03.209594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.476 [2024-11-04 10:18:03.209603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.476 [2024-11-04 10:18:03.209612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.476 [2024-11-04 10:18:03.209620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.477 [2024-11-04 10:18:03.209674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.477 [2024-11-04 10:18:03.209683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.477 [2024-11-04 10:18:03.209694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.477 [2024-11-04 10:18:03.209701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.477 [2024-11-04 10:18:03.209732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.477 [2024-11-04 10:18:03.209739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.477 [2024-11-04 10:18:03.209748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.477 [2024-11-04 10:18:03.209755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.290313] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.290357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.734 [2024-11-04 10:18:03.290369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.290376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.734 [2024-11-04 10:18:03.353314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.734 [2024-11-04 10:18:03.353429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.734 [2024-11-04 10:18:03.353505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.734 [2024-11-04 10:18:03.353636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:57.734 [2024-11-04 10:18:03.353707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.734 [2024-11-04 10:18:03.353804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.734 [2024-11-04 10:18:03.353860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:57.734 [2024-11-04 10:18:03.353871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.734 [2024-11-04 10:18:03.353881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:57.734 [2024-11-04 10:18:03.353888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:57.734 [2024-11-04 10:18:03.354042] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.908 ms, result 0 00:17:57.734 true 00:17:57.734 10:18:03 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73550 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73550 ']' 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73550 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73550 00:17:57.734 killing process with pid 73550 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73550' 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73550 00:17:57.734 10:18:03 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73550 00:18:04.290 10:18:09 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:18:04.549 65536+0 records in 00:18:04.549 65536+0 records out 00:18:04.549 268435456 bytes (268 MB, 256 MiB) copied, 1.06857 s, 251 MB/s 00:18:04.549 10:18:10 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:04.549 [2024-11-04 10:18:10.174982] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
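At this point the 'FTL shutdown' management process has completed in 343.908 ms, killprocess has stopped the app under test (pid 73550), and trim.sh@66 has generated 256 MiB of random data for the random_pattern input that spdk_dd then reads via --if; spdk_dd's startup banner appears above and its EAL output continues below. The dd numbers are internally consistent, which a quick check confirms (a sketch, not part of the test):

    # Sketch only: verify the dd line above against its own arithmetic.
    echo $(( 65536 * 4096 ))   # 65536 records of bs=4K -> 268435456 bytes, i.e. 256 MiB
    printf '%.0f MB/s\n' "$(bc -l <<< '268435456 / 1.06857 / 1000000')"   # ~251 MB/s, as logged

The 4 KiB record size also matches ftl0's 4096-byte block_size from the descriptor earlier, so the pattern spdk_dd copies onto ftl0 is block-aligned.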
00:18:04.549 [2024-11-04 10:18:10.175104] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73733 ] 00:18:04.808 [2024-11-04 10:18:10.330547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.808 [2024-11-04 10:18:10.407611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.067 [2024-11-04 10:18:10.613885] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.067 [2024-11-04 10:18:10.613930] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:05.067 [2024-11-04 10:18:10.761466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.761600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:05.067 [2024-11-04 10:18:10.761615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:05.067 [2024-11-04 10:18:10.761622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.763679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.763708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:05.067 [2024-11-04 10:18:10.763715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.040 ms 00:18:05.067 [2024-11-04 10:18:10.763721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.763776] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:05.067 [2024-11-04 10:18:10.764292] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:05.067 [2024-11-04 10:18:10.764307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.764313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:05.067 [2024-11-04 10:18:10.764320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:05.067 [2024-11-04 10:18:10.764326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.765310] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:05.067 [2024-11-04 10:18:10.775024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.775240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:05.067 [2024-11-04 10:18:10.775257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.715 ms 00:18:05.067 [2024-11-04 10:18:10.775263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.775323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.775332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:05.067 [2024-11-04 10:18:10.775339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:05.067 [2024-11-04 10:18:10.775344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.779645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:05.067 [2024-11-04 10:18:10.779671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:05.067 [2024-11-04 10:18:10.779679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.269 ms 00:18:05.067 [2024-11-04 10:18:10.779685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.779751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.779759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:05.067 [2024-11-04 10:18:10.779765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:05.067 [2024-11-04 10:18:10.779771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.779798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.779804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:05.067 [2024-11-04 10:18:10.779812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:05.067 [2024-11-04 10:18:10.779818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.779835] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:05.067 [2024-11-04 10:18:10.782499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.782595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:05.067 [2024-11-04 10:18:10.782606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.667 ms 00:18:05.067 [2024-11-04 10:18:10.782612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.782640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.782647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:05.067 [2024-11-04 10:18:10.782653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:05.067 [2024-11-04 10:18:10.782658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.067 [2024-11-04 10:18:10.782671] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:05.067 [2024-11-04 10:18:10.782686] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:05.067 [2024-11-04 10:18:10.782713] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:05.067 [2024-11-04 10:18:10.782725] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:05.067 [2024-11-04 10:18:10.782815] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:05.067 [2024-11-04 10:18:10.782824] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:05.067 [2024-11-04 10:18:10.782831] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:05.067 [2024-11-04 10:18:10.782839] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:05.067 [2024-11-04 10:18:10.782845] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:05.067 [2024-11-04 10:18:10.782853] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:05.067 [2024-11-04 10:18:10.782859] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:05.067 [2024-11-04 10:18:10.782864] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:05.067 [2024-11-04 10:18:10.782870] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:05.067 [2024-11-04 10:18:10.782876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.067 [2024-11-04 10:18:10.782881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:05.067 [2024-11-04 10:18:10.782887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:18:05.068 [2024-11-04 10:18:10.782893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.068 [2024-11-04 10:18:10.782959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.068 [2024-11-04 10:18:10.782966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:05.068 [2024-11-04 10:18:10.782972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:18:05.068 [2024-11-04 10:18:10.782979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.068 [2024-11-04 10:18:10.783052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:05.068 [2024-11-04 10:18:10.783059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:05.068 [2024-11-04 10:18:10.783065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:05.068 [2024-11-04 10:18:10.783082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783088] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:05.068 [2024-11-04 10:18:10.783099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.068 [2024-11-04 10:18:10.783109] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:05.068 [2024-11-04 10:18:10.783114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:05.068 [2024-11-04 10:18:10.783121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:05.068 [2024-11-04 10:18:10.783131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:05.068 [2024-11-04 10:18:10.783136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:05.068 [2024-11-04 10:18:10.783141] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:05.068 [2024-11-04 10:18:10.783151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783157] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:05.068 [2024-11-04 10:18:10.783167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783177] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:05.068 [2024-11-04 10:18:10.783182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:05.068 [2024-11-04 10:18:10.783197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:05.068 [2024-11-04 10:18:10.783210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783215] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783220] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:05.068 [2024-11-04 10:18:10.783225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.068 [2024-11-04 10:18:10.783235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:05.068 [2024-11-04 10:18:10.783239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:05.068 [2024-11-04 10:18:10.783244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:05.068 [2024-11-04 10:18:10.783249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:05.068 [2024-11-04 10:18:10.783254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:05.068 [2024-11-04 10:18:10.783259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:05.068 [2024-11-04 10:18:10.783269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:05.068 [2024-11-04 10:18:10.783273] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:05.068 [2024-11-04 10:18:10.783286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:05.068 [2024-11-04 10:18:10.783291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:05.068 [2024-11-04 10:18:10.783304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:05.068 [2024-11-04 10:18:10.783310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:05.068 [2024-11-04 10:18:10.783315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:05.068 
[2024-11-04 10:18:10.783320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:05.068 [2024-11-04 10:18:10.783325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:05.068 [2024-11-04 10:18:10.783330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:05.068 [2024-11-04 10:18:10.783336] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:05.068 [2024-11-04 10:18:10.783343] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783349] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:05.068 [2024-11-04 10:18:10.783355] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:05.068 [2024-11-04 10:18:10.783360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:05.068 [2024-11-04 10:18:10.783365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:05.068 [2024-11-04 10:18:10.783371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:05.068 [2024-11-04 10:18:10.783376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:05.068 [2024-11-04 10:18:10.783381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:05.068 [2024-11-04 10:18:10.783386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:05.068 [2024-11-04 10:18:10.783392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:05.068 [2024-11-04 10:18:10.783397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:05.068 [2024-11-04 10:18:10.783424] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:05.068 [2024-11-04 10:18:10.783430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:05.068 [2024-11-04 10:18:10.783442] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:05.068 [2024-11-04 10:18:10.783447] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:05.068 [2024-11-04 10:18:10.783452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:05.068 [2024-11-04 10:18:10.783458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.068 [2024-11-04 10:18:10.783467] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:05.068 [2024-11-04 10:18:10.783472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:18:05.068 [2024-11-04 10:18:10.783480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.068 [2024-11-04 10:18:10.804296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.068 [2024-11-04 10:18:10.804325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:05.068 [2024-11-04 10:18:10.804334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.779 ms 00:18:05.068 [2024-11-04 10:18:10.804340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.068 [2024-11-04 10:18:10.804437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.068 [2024-11-04 10:18:10.804445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:05.068 [2024-11-04 10:18:10.804454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:05.069 [2024-11-04 10:18:10.804460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.843401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.843431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:05.327 [2024-11-04 10:18:10.843440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.925 ms 00:18:05.327 [2024-11-04 10:18:10.843446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.843507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.843515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:05.327 [2024-11-04 10:18:10.843522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:05.327 [2024-11-04 10:18:10.843528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.843822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.843834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:05.327 [2024-11-04 10:18:10.843841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:18:05.327 [2024-11-04 10:18:10.843847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.843948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.843959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:05.327 [2024-11-04 10:18:10.843965] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:18:05.327 [2024-11-04 10:18:10.843971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.854690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.854717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:05.327 [2024-11-04 10:18:10.854725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.703 ms 00:18:05.327 [2024-11-04 10:18:10.854731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.864219] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:05.327 [2024-11-04 10:18:10.864340] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:05.327 [2024-11-04 10:18:10.864354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.864360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:05.327 [2024-11-04 10:18:10.864367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.517 ms 00:18:05.327 [2024-11-04 10:18:10.864372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.882900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.882992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:05.327 [2024-11-04 10:18:10.883011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.485 ms 00:18:05.327 [2024-11-04 10:18:10.883017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.891878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.891902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:05.327 [2024-11-04 10:18:10.891910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.811 ms 00:18:05.327 [2024-11-04 10:18:10.891915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.900397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.900420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:05.327 [2024-11-04 10:18:10.900427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.442 ms 00:18:05.327 [2024-11-04 10:18:10.900433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.900907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.900923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:05.327 [2024-11-04 10:18:10.900930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:18:05.327 [2024-11-04 10:18:10.900935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.945110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.945150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:05.327 [2024-11-04 10:18:10.945160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
44.157 ms 00:18:05.327 [2024-11-04 10:18:10.945166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.952853] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:05.327 [2024-11-04 10:18:10.964054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.964082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:05.327 [2024-11-04 10:18:10.964092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.826 ms 00:18:05.327 [2024-11-04 10:18:10.964099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.964170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.964179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:05.327 [2024-11-04 10:18:10.964188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:05.327 [2024-11-04 10:18:10.964193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.964229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.964236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:05.327 [2024-11-04 10:18:10.964242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:05.327 [2024-11-04 10:18:10.964248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.964280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.327 [2024-11-04 10:18:10.964290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:05.327 [2024-11-04 10:18:10.964296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:05.327 [2024-11-04 10:18:10.964304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.327 [2024-11-04 10:18:10.964328] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:05.327 [2024-11-04 10:18:10.964335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.328 [2024-11-04 10:18:10.964341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:05.328 [2024-11-04 10:18:10.964347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:05.328 [2024-11-04 10:18:10.964352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.328 [2024-11-04 10:18:10.982104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.328 [2024-11-04 10:18:10.982201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:05.328 [2024-11-04 10:18:10.982218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.735 ms 00:18:05.328 [2024-11-04 10:18:10.982224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.328 [2024-11-04 10:18:10.982292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:05.328 [2024-11-04 10:18:10.982301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:05.328 [2024-11-04 10:18:10.982307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:05.328 [2024-11-04 10:18:10.982313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:05.328 
[2024-11-04 10:18:10.982947] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:05.328 [2024-11-04 10:18:10.985252] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 221.235 ms, result 0 00:18:05.328 [2024-11-04 10:18:10.986019] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:05.328 [2024-11-04 10:18:11.000877] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.700  [2024-11-04T10:18:13.011Z] Copying: 24/256 [MB] (24 MBps) [2024-11-04T10:18:14.385Z] Copying: 53/256 [MB] (29 MBps) [2024-11-04T10:18:15.358Z] Copying: 77/256 [MB] (23 MBps) [2024-11-04T10:18:16.293Z] Copying: 105/256 [MB] (28 MBps) [2024-11-04T10:18:17.226Z] Copying: 126/256 [MB] (20 MBps) [2024-11-04T10:18:18.159Z] Copying: 153/256 [MB] (27 MBps) [2024-11-04T10:18:19.101Z] Copying: 174/256 [MB] (21 MBps) [2024-11-04T10:18:20.108Z] Copying: 193/256 [MB] (19 MBps) [2024-11-04T10:18:21.041Z] Copying: 211/256 [MB] (17 MBps) [2024-11-04T10:18:22.414Z] Copying: 225/256 [MB] (14 MBps) [2024-11-04T10:18:23.348Z] Copying: 242/256 [MB] (16 MBps) [2024-11-04T10:18:23.348Z] Copying: 255/256 [MB] (12 MBps) [2024-11-04T10:18:23.348Z] Copying: 256/256 [MB] (average 21 MBps)[2024-11-04 10:18:23.049847] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:17.603 [2024-11-04 10:18:23.059173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.059207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.603 [2024-11-04 10:18:23.059219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:17.603 [2024-11-04 10:18:23.059227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.059249] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:17.603 [2024-11-04 10:18:23.061897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.061924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.603 [2024-11-04 10:18:23.061939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms 00:18:17.603 [2024-11-04 10:18:23.061946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.064389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.064419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.603 [2024-11-04 10:18:23.064436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.422 ms 00:18:17.603 [2024-11-04 10:18:23.064443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.072232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.072261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.603 [2024-11-04 10:18:23.072286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.773 ms 00:18:17.603 [2024-11-04 10:18:23.072297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.079471] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.079499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.603 [2024-11-04 10:18:23.079509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.131 ms 00:18:17.603 [2024-11-04 10:18:23.079517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.103286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.103317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.603 [2024-11-04 10:18:23.103328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.727 ms 00:18:17.603 [2024-11-04 10:18:23.103334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.117725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.117870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.603 [2024-11-04 10:18:23.117887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.357 ms 00:18:17.603 [2024-11-04 10:18:23.117899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.118052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.118063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.603 [2024-11-04 10:18:23.118072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:17.603 [2024-11-04 10:18:23.118079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.142111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.142142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.603 [2024-11-04 10:18:23.142153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:18:17.603 [2024-11-04 10:18:23.142160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.165906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.165939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.603 [2024-11-04 10:18:23.165949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.712 ms 00:18:17.603 [2024-11-04 10:18:23.165956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.188239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.603 [2024-11-04 10:18:23.188275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.603 [2024-11-04 10:18:23.188285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.227 ms 00:18:17.603 [2024-11-04 10:18:23.188292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.603 [2024-11-04 10:18:23.210788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.604 [2024-11-04 10:18:23.210820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.604 [2024-11-04 10:18:23.210829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.430 ms 00:18:17.604 [2024-11-04 10:18:23.210836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:17.604 [2024-11-04 10:18:23.210870] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.604 [2024-11-04 10:18:23.210883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.210995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:18:17.604 [2024-11-04 10:18:23.211059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.604 [2024-11-04 10:18:23.211414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211602] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.605 [2024-11-04 10:18:23.211625] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.605 [2024-11-04 10:18:23.211634] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:17.605 [2024-11-04 10:18:23.211642] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.605 [2024-11-04 10:18:23.211650] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.605 [2024-11-04 10:18:23.211657] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.605 [2024-11-04 10:18:23.211664] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.605 [2024-11-04 10:18:23.211671] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.605 [2024-11-04 10:18:23.211678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.605 [2024-11-04 10:18:23.211685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.605 [2024-11-04 10:18:23.211692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.605 [2024-11-04 10:18:23.211698] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.605 [2024-11-04 10:18:23.211705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.605 [2024-11-04 10:18:23.211712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.605 [2024-11-04 10:18:23.211720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.836 ms 00:18:17.605 [2024-11-04 10:18:23.211729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.224530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.605 [2024-11-04 10:18:23.224561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.605 [2024-11-04 10:18:23.224571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.785 ms 00:18:17.605 [2024-11-04 10:18:23.224578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.224947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.605 [2024-11-04 10:18:23.224963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.605 [2024-11-04 10:18:23.224976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.339 ms 00:18:17.605 [2024-11-04 10:18:23.224983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.259960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.605 [2024-11-04 10:18:23.259993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.605 [2024-11-04 10:18:23.260003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.605 [2024-11-04 10:18:23.260010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.260083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.605 [2024-11-04 10:18:23.260092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.605 
[2024-11-04 10:18:23.260102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.605 [2024-11-04 10:18:23.260109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.260153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.605 [2024-11-04 10:18:23.260162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.605 [2024-11-04 10:18:23.260170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.605 [2024-11-04 10:18:23.260178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.260194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.605 [2024-11-04 10:18:23.260201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.605 [2024-11-04 10:18:23.260209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.605 [2024-11-04 10:18:23.260218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.605 [2024-11-04 10:18:23.337309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.605 [2024-11-04 10:18:23.337463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.605 [2024-11-04 10:18:23.337479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.605 [2024-11-04 10:18:23.337487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.400649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.400797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.864 [2024-11-04 10:18:23.400813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.400826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.400870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.400879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.864 [2024-11-04 10:18:23.400887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.400894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.400920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.400928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.864 [2024-11-04 10:18:23.400936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.400943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.401031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.401041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.864 [2024-11-04 10:18:23.401049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.401056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.401085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.401093] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.864 [2024-11-04 10:18:23.401101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.401108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.401145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.401154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.864 [2024-11-04 10:18:23.401161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.401168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.401208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.864 [2024-11-04 10:18:23.401218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.864 [2024-11-04 10:18:23.401226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.864 [2024-11-04 10:18:23.401233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.864 [2024-11-04 10:18:23.401362] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.180 ms, result 0 00:18:18.798 00:18:18.798 00:18:18.798 10:18:24 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73885 00:18:18.798 10:18:24 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73885 00:18:18.798 10:18:24 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73885 ']' 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:18.798 10:18:24 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:18.798 [2024-11-04 10:18:24.395107] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:18:18.798 [2024-11-04 10:18:24.395228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73885 ] 00:18:19.056 [2024-11-04 10:18:24.554860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.056 [2024-11-04 10:18:24.652212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:19.621 10:18:25 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:19.621 10:18:25 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:19.621 10:18:25 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:19.879 [2024-11-04 10:18:25.433135] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.879 [2024-11-04 10:18:25.433194] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.879 [2024-11-04 10:18:25.607002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.879 [2024-11-04 10:18:25.607186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.879 [2024-11-04 10:18:25.607209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.879 [2024-11-04 10:18:25.607218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.879 [2024-11-04 10:18:25.609881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.879 [2024-11-04 10:18:25.609915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.879 [2024-11-04 10:18:25.609926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.642 ms 00:18:19.879 [2024-11-04 10:18:25.609933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.879 [2024-11-04 10:18:25.610006] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.879 [2024-11-04 10:18:25.610725] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.879 [2024-11-04 10:18:25.610749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.879 [2024-11-04 10:18:25.610757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.879 [2024-11-04 10:18:25.610767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.752 ms 00:18:19.879 [2024-11-04 10:18:25.610775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.879 [2024-11-04 10:18:25.612163] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:20.139 [2024-11-04 10:18:25.624530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.624574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:20.139 [2024-11-04 10:18:25.624586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.377 ms 00:18:20.139 [2024-11-04 10:18:25.624595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.624681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.624694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:20.139 [2024-11-04 10:18:25.624702] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:18:20.139 [2024-11-04 10:18:25.624711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.629712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.629749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:20.139 [2024-11-04 10:18:25.629758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.956 ms 00:18:20.139 [2024-11-04 10:18:25.629766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.629871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.629883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:20.139 [2024-11-04 10:18:25.629891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:20.139 [2024-11-04 10:18:25.629900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.629924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.629936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:20.139 [2024-11-04 10:18:25.629944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:20.139 [2024-11-04 10:18:25.629953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.629975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:20.139 [2024-11-04 10:18:25.633316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.633342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:20.139 [2024-11-04 10:18:25.633353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.344 ms 00:18:20.139 [2024-11-04 10:18:25.633360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.633396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.633404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:20.139 [2024-11-04 10:18:25.633413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:20.139 [2024-11-04 10:18:25.633420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.633441] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:20.139 [2024-11-04 10:18:25.633458] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:20.139 [2024-11-04 10:18:25.633499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:20.139 [2024-11-04 10:18:25.633514] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:20.139 [2024-11-04 10:18:25.633619] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:20.139 [2024-11-04 10:18:25.633629] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:20.139 [2024-11-04 10:18:25.633641] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:20.139 [2024-11-04 10:18:25.633651] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:20.139 [2024-11-04 10:18:25.633663] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:20.139 [2024-11-04 10:18:25.633671] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:20.139 [2024-11-04 10:18:25.633680] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:20.139 [2024-11-04 10:18:25.633687] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:20.139 [2024-11-04 10:18:25.633698] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:20.139 [2024-11-04 10:18:25.633705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.633714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:20.139 [2024-11-04 10:18:25.633721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:20.139 [2024-11-04 10:18:25.633729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.633830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.139 [2024-11-04 10:18:25.633841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:20.139 [2024-11-04 10:18:25.633850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:20.139 [2024-11-04 10:18:25.633859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.139 [2024-11-04 10:18:25.633969] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:20.139 [2024-11-04 10:18:25.633981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:20.139 [2024-11-04 10:18:25.633989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:20.139 [2024-11-04 10:18:25.633998] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:20.139 [2024-11-04 10:18:25.634014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:20.139 [2024-11-04 10:18:25.634040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.139 [2024-11-04 10:18:25.634056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:20.139 [2024-11-04 10:18:25.634064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:20.139 [2024-11-04 10:18:25.634071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:20.139 [2024-11-04 10:18:25.634079] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:20.139 [2024-11-04 10:18:25.634086] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:20.139 [2024-11-04 10:18:25.634094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 
[2024-11-04 10:18:25.634101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:20.139 [2024-11-04 10:18:25.634109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:20.139 [2024-11-04 10:18:25.634135] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:20.139 [2024-11-04 10:18:25.634160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:20.139 [2024-11-04 10:18:25.634181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:20.139 [2024-11-04 10:18:25.634203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:20.139 [2024-11-04 10:18:25.634225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.139 [2024-11-04 10:18:25.634239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:20.139 [2024-11-04 10:18:25.634247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:20.139 [2024-11-04 10:18:25.634253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:20.139 [2024-11-04 10:18:25.634261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:20.139 [2024-11-04 10:18:25.634268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:20.139 [2024-11-04 10:18:25.634277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:20.139 [2024-11-04 10:18:25.634291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:20.139 [2024-11-04 10:18:25.634299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634307] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:20.139 [2024-11-04 10:18:25.634315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:20.139 [2024-11-04 10:18:25.634323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:20.139 [2024-11-04 10:18:25.634332] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:20.139 [2024-11-04 10:18:25.634341] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:20.139 [2024-11-04 10:18:25.634347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:20.140 [2024-11-04 10:18:25.634356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:20.140 [2024-11-04 10:18:25.634363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:20.140 [2024-11-04 10:18:25.634371] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:20.140 [2024-11-04 10:18:25.634377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:20.140 [2024-11-04 10:18:25.634387] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:20.140 [2024-11-04 10:18:25.634395] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:20.140 [2024-11-04 10:18:25.634415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:20.140 [2024-11-04 10:18:25.634423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:20.140 [2024-11-04 10:18:25.634430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:20.140 [2024-11-04 10:18:25.634439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:20.140 [2024-11-04 10:18:25.634446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:20.140 [2024-11-04 10:18:25.634454] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:20.140 [2024-11-04 10:18:25.634461] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:20.140 [2024-11-04 10:18:25.634470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:20.140 [2024-11-04 10:18:25.634477] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:20.140 [2024-11-04 10:18:25.634517] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:20.140 [2024-11-04 
10:18:25.634525] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634536] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:20.140 [2024-11-04 10:18:25.634543] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:20.140 [2024-11-04 10:18:25.634552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:20.140 [2024-11-04 10:18:25.634559] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:20.140 [2024-11-04 10:18:25.634568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.634575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:20.140 [2024-11-04 10:18:25.634584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.665 ms 00:18:20.140 [2024-11-04 10:18:25.634591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.660419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.660560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:20.140 [2024-11-04 10:18:25.660579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.751 ms 00:18:20.140 [2024-11-04 10:18:25.660587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.660704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.660715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:20.140 [2024-11-04 10:18:25.660725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:20.140 [2024-11-04 10:18:25.660732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.691068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.691193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:20.140 [2024-11-04 10:18:25.691213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.314 ms 00:18:20.140 [2024-11-04 10:18:25.691223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.691280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.691289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:20.140 [2024-11-04 10:18:25.691299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:20.140 [2024-11-04 10:18:25.691306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.691619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.691633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:20.140 [2024-11-04 10:18:25.691643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.291 ms 00:18:20.140 [2024-11-04 10:18:25.691651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.691775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.691807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:20.140 [2024-11-04 10:18:25.691817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:18:20.140 [2024-11-04 10:18:25.691824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.706118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.706229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:20.140 [2024-11-04 10:18:25.706247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.272 ms 00:18:20.140 [2024-11-04 10:18:25.706255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.719123] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:20.140 [2024-11-04 10:18:25.719157] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:20.140 [2024-11-04 10:18:25.719170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.719178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:20.140 [2024-11-04 10:18:25.719188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:18:20.140 [2024-11-04 10:18:25.719195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.743365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.743395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:20.140 [2024-11-04 10:18:25.743407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.100 ms 00:18:20.140 [2024-11-04 10:18:25.743415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.755292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.755317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:20.140 [2024-11-04 10:18:25.755330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.809 ms 00:18:20.140 [2024-11-04 10:18:25.755338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.766746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.766770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:20.140 [2024-11-04 10:18:25.766789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.344 ms 00:18:20.140 [2024-11-04 10:18:25.766796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.767405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.767424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:20.140 [2024-11-04 10:18:25.767434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:18:20.140 [2024-11-04 10:18:25.767441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 
10:18:25.832019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.832065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:20.140 [2024-11-04 10:18:25.832081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.552 ms 00:18:20.140 [2024-11-04 10:18:25.832089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.842508] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:20.140 [2024-11-04 10:18:25.856352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.856389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:20.140 [2024-11-04 10:18:25.856400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.168 ms 00:18:20.140 [2024-11-04 10:18:25.856412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.856485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.856497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:20.140 [2024-11-04 10:18:25.856506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:20.140 [2024-11-04 10:18:25.856515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.856562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.856572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:20.140 [2024-11-04 10:18:25.856580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:20.140 [2024-11-04 10:18:25.856588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.856613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.140 [2024-11-04 10:18:25.856622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:20.140 [2024-11-04 10:18:25.856630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:20.140 [2024-11-04 10:18:25.856641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.140 [2024-11-04 10:18:25.856671] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:20.140 [2024-11-04 10:18:25.856683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.141 [2024-11-04 10:18:25.856691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:20.141 [2024-11-04 10:18:25.856700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:20.141 [2024-11-04 10:18:25.856709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.399 [2024-11-04 10:18:25.880099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.399 [2024-11-04 10:18:25.880129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:20.399 [2024-11-04 10:18:25.880142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.366 ms 00:18:20.399 [2024-11-04 10:18:25.880150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.399 [2024-11-04 10:18:25.880237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.399 [2024-11-04 10:18:25.880248] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:20.399 [2024-11-04 10:18:25.880258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:20.399 [2024-11-04 10:18:25.880280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.399 [2024-11-04 10:18:25.881365] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.399 [2024-11-04 10:18:25.884446] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.084 ms, result 0 00:18:20.399 [2024-11-04 10:18:25.886308] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:20.399 Some configs were skipped because the RPC state that can call them passed over. 00:18:20.399 10:18:25 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:20.399 [2024-11-04 10:18:26.109828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.399 [2024-11-04 10:18:26.109873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:20.399 [2024-11-04 10:18:26.109884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.707 ms 00:18:20.399 [2024-11-04 10:18:26.109894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.399 [2024-11-04 10:18:26.109926] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.807 ms, result 0 00:18:20.399 true 00:18:20.399 10:18:26 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:20.692 [2024-11-04 10:18:26.309528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:20.692 [2024-11-04 10:18:26.309572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:20.692 [2024-11-04 10:18:26.309585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.194 ms 00:18:20.692 [2024-11-04 10:18:26.309593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:20.692 [2024-11-04 10:18:26.309629] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.299 ms, result 0 00:18:20.692 true 00:18:20.692 10:18:26 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73885 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73885 ']' 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73885 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73885 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:20.692 killing process with pid 73885 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73885' 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73885 00:18:20.692 10:18:26 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73885 00:18:21.633 [2024-11-04 10:18:27.051123] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.051179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:21.633 [2024-11-04 10:18:27.051192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:21.633 [2024-11-04 10:18:27.051202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.051223] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:21.633 [2024-11-04 10:18:27.053827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.053859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:21.633 [2024-11-04 10:18:27.053874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.587 ms 00:18:21.633 [2024-11-04 10:18:27.053883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.054182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.054198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:21.633 [2024-11-04 10:18:27.054209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:18:21.633 [2024-11-04 10:18:27.054216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.058355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.058386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:21.633 [2024-11-04 10:18:27.058397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.119 ms 00:18:21.633 [2024-11-04 10:18:27.058407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.065315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.065346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:21.633 [2024-11-04 10:18:27.065360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.873 ms 00:18:21.633 [2024-11-04 10:18:27.065369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.075016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.075050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:21.633 [2024-11-04 10:18:27.075063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.593 ms 00:18:21.633 [2024-11-04 10:18:27.075076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.082448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.082482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:21.633 [2024-11-04 10:18:27.082494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.333 ms 00:18:21.633 [2024-11-04 10:18:27.082505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.082650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.082666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:21.633 [2024-11-04 10:18:27.082676] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:21.633 [2024-11-04 10:18:27.082683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.092668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.092700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:21.633 [2024-11-04 10:18:27.092711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.963 ms 00:18:21.633 [2024-11-04 10:18:27.092718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.102351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.102382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:21.633 [2024-11-04 10:18:27.102395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.586 ms 00:18:21.633 [2024-11-04 10:18:27.102402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.111483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.111513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:21.633 [2024-11-04 10:18:27.111525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.044 ms 00:18:21.633 [2024-11-04 10:18:27.111532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.120508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.633 [2024-11-04 10:18:27.120539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:21.633 [2024-11-04 10:18:27.120549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.914 ms 00:18:21.633 [2024-11-04 10:18:27.120556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.633 [2024-11-04 10:18:27.120589] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:21.633 [2024-11-04 10:18:27.120603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120689] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:21.633 [2024-11-04 10:18:27.120747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 
[2024-11-04 10:18:27.120909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.120999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:21.634 [2024-11-04 10:18:27.121112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:21.634 [2024-11-04 10:18:27.121448] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:21.634 [2024-11-04 10:18:27.121459] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:21.634 [2024-11-04 10:18:27.121471] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:21.634 [2024-11-04 10:18:27.121482] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:21.634 [2024-11-04 10:18:27.121491] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:21.634 [2024-11-04 10:18:27.121500] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:21.634 [2024-11-04 10:18:27.121506] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:21.634 [2024-11-04 10:18:27.121515] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:21.634 [2024-11-04 10:18:27.121522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:21.634 [2024-11-04 10:18:27.121530] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:21.635 [2024-11-04 10:18:27.121536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:21.635 [2024-11-04 10:18:27.121545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:21.635 [2024-11-04 10:18:27.121553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:21.635 [2024-11-04 10:18:27.121562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.957 ms 00:18:21.635 [2024-11-04 10:18:27.121569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.133867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.635 [2024-11-04 10:18:27.133896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:21.635 [2024-11-04 10:18:27.133910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.277 ms 00:18:21.635 [2024-11-04 10:18:27.133917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.134275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:21.635 [2024-11-04 10:18:27.134295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:21.635 [2024-11-04 10:18:27.134305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:21.635 [2024-11-04 10:18:27.134313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.177747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.177792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:21.635 [2024-11-04 10:18:27.177805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.177813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.177910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.177919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:21.635 [2024-11-04 10:18:27.177929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.177936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.177983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.177992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:21.635 [2024-11-04 10:18:27.178002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.178009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.178027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.178035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:21.635 [2024-11-04 10:18:27.178043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.178050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.240248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.240299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:21.635 [2024-11-04 10:18:27.240311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.240316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 
10:18:27.290067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:21.635 [2024-11-04 10:18:27.290111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:21.635 [2024-11-04 10:18:27.290205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:21.635 [2024-11-04 10:18:27.290247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:21.635 [2024-11-04 10:18:27.290338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:21.635 [2024-11-04 10:18:27.290385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:21.635 [2024-11-04 10:18:27.290436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:21.635 [2024-11-04 10:18:27.290482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:21.635 [2024-11-04 10:18:27.290489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:21.635 [2024-11-04 10:18:27.290495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:21.635 [2024-11-04 10:18:27.290594] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 239.458 ms, result 0 00:18:22.202 10:18:27 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:22.202 10:18:27 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:22.202 [2024-11-04 10:18:27.857059] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:18:22.202 [2024-11-04 10:18:27.857172] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73933 ] 00:18:22.460 [2024-11-04 10:18:28.010517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.460 [2024-11-04 10:18:28.087501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.718 [2024-11-04 10:18:28.293576] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.718 [2024-11-04 10:18:28.293630] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:22.718 [2024-11-04 10:18:28.447312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.718 [2024-11-04 10:18:28.447357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:22.718 [2024-11-04 10:18:28.447370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.718 [2024-11-04 10:18:28.447378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.718 [2024-11-04 10:18:28.449996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.718 [2024-11-04 10:18:28.450029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:22.718 [2024-11-04 10:18:28.450039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.603 ms 00:18:22.718 [2024-11-04 10:18:28.450046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.718 [2024-11-04 10:18:28.450134] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:22.718 [2024-11-04 10:18:28.450848] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:22.718 [2024-11-04 10:18:28.450874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.718 [2024-11-04 10:18:28.450882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:22.718 [2024-11-04 10:18:28.450890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.745 ms 00:18:22.718 [2024-11-04 10:18:28.450897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.718 [2024-11-04 10:18:28.452039] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:22.977 [2024-11-04 10:18:28.464014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.977 [2024-11-04 10:18:28.464049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:22.977 [2024-11-04 10:18:28.464064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.977 ms 00:18:22.978 [2024-11-04 10:18:28.464073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.464197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.464214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:22.978 [2024-11-04 10:18:28.464223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.017 ms 00:18:22.978 [2024-11-04 10:18:28.464230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.468992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.469026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:22.978 [2024-11-04 10:18:28.469035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.714 ms 00:18:22.978 [2024-11-04 10:18:28.469042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.469124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.469139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:22.978 [2024-11-04 10:18:28.469148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:22.978 [2024-11-04 10:18:28.469155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.469179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.469192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:22.978 [2024-11-04 10:18:28.469202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:22.978 [2024-11-04 10:18:28.469209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.469229] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:22.978 [2024-11-04 10:18:28.472371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.472400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:22.978 [2024-11-04 10:18:28.472410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.147 ms 00:18:22.978 [2024-11-04 10:18:28.472422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.472456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.472464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:22.978 [2024-11-04 10:18:28.472472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:22.978 [2024-11-04 10:18:28.472479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.472495] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:22.978 [2024-11-04 10:18:28.472511] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:22.978 [2024-11-04 10:18:28.472546] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:22.978 [2024-11-04 10:18:28.472567] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:22.978 [2024-11-04 10:18:28.472669] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:22.978 [2024-11-04 10:18:28.472683] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:22.978 [2024-11-04 10:18:28.472692] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:22.978 [2024-11-04 10:18:28.472702] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:22.978 [2024-11-04 10:18:28.472710] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:22.978 [2024-11-04 10:18:28.472720] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:22.978 [2024-11-04 10:18:28.472727] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:22.978 [2024-11-04 10:18:28.472734] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:22.978 [2024-11-04 10:18:28.472741] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:22.978 [2024-11-04 10:18:28.472749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.472756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:22.978 [2024-11-04 10:18:28.472763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:22.978 [2024-11-04 10:18:28.472770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.472869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.978 [2024-11-04 10:18:28.472883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:22.978 [2024-11-04 10:18:28.472890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:22.978 [2024-11-04 10:18:28.472900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.978 [2024-11-04 10:18:28.472998] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:22.978 [2024-11-04 10:18:28.473012] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:22.978 [2024-11-04 10:18:28.473020] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:22.978 [2024-11-04 10:18:28.473044] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:22.978 [2024-11-04 10:18:28.473064] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.978 [2024-11-04 10:18:28.473077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:22.978 [2024-11-04 10:18:28.473083] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:22.978 [2024-11-04 10:18:28.473090] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:22.978 [2024-11-04 10:18:28.473102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:22.978 [2024-11-04 10:18:28.473108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:22.978 [2024-11-04 10:18:28.473114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473120] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:22.978 [2024-11-04 10:18:28.473127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:22.978 [2024-11-04 10:18:28.473147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:22.978 [2024-11-04 10:18:28.473166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:22.978 [2024-11-04 10:18:28.473185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:22.978 [2024-11-04 10:18:28.473203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473209] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:22.978 [2024-11-04 10:18:28.473222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.978 [2024-11-04 10:18:28.473235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:22.978 [2024-11-04 10:18:28.473242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:22.978 [2024-11-04 10:18:28.473248] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:22.978 [2024-11-04 10:18:28.473256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:22.978 [2024-11-04 10:18:28.473262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:22.978 [2024-11-04 10:18:28.473268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:22.978 [2024-11-04 10:18:28.473281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:22.978 [2024-11-04 10:18:28.473287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473293] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:22.978 [2024-11-04 10:18:28.473301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:22.978 [2024-11-04 10:18:28.473308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:22.978 [2024-11-04 10:18:28.473325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:22.978 
[2024-11-04 10:18:28.473332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:22.978 [2024-11-04 10:18:28.473338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:22.978 [2024-11-04 10:18:28.473345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:22.978 [2024-11-04 10:18:28.473352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:22.978 [2024-11-04 10:18:28.473358] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:22.978 [2024-11-04 10:18:28.473366] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:22.978 [2024-11-04 10:18:28.473375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.978 [2024-11-04 10:18:28.473383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:22.978 [2024-11-04 10:18:28.473391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:22.979 [2024-11-04 10:18:28.473397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:22.979 [2024-11-04 10:18:28.473404] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:22.979 [2024-11-04 10:18:28.473411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:22.979 [2024-11-04 10:18:28.473418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:22.979 [2024-11-04 10:18:28.473425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:22.979 [2024-11-04 10:18:28.473432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:22.979 [2024-11-04 10:18:28.473439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:22.979 [2024-11-04 10:18:28.473446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:22.979 [2024-11-04 10:18:28.473482] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:22.979 [2024-11-04 10:18:28.473490] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:22.979 [2024-11-04 10:18:28.473504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:22.979 [2024-11-04 10:18:28.473511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:22.979 [2024-11-04 10:18:28.473518] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:22.979 [2024-11-04 10:18:28.473525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.473532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:22.979 [2024-11-04 10:18:28.473539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:18:22.979 [2024-11-04 10:18:28.473549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.498875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.498906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:22.979 [2024-11-04 10:18:28.498916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.266 ms 00:18:22.979 [2024-11-04 10:18:28.498923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.499035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.499045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:22.979 [2024-11-04 10:18:28.499056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:18:22.979 [2024-11-04 10:18:28.499063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.543733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.543772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:22.979 [2024-11-04 10:18:28.543792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.649 ms 00:18:22.979 [2024-11-04 10:18:28.543800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.543894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.543906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:22.979 [2024-11-04 10:18:28.543914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:22.979 [2024-11-04 10:18:28.543922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.544227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.544251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:22.979 [2024-11-04 10:18:28.544260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:22.979 [2024-11-04 10:18:28.544276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 
10:18:28.544398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.544407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:22.979 [2024-11-04 10:18:28.544415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms 00:18:22.979 [2024-11-04 10:18:28.544422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.557500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.557532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:22.979 [2024-11-04 10:18:28.557542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.059 ms 00:18:22.979 [2024-11-04 10:18:28.557548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.569655] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:22.979 [2024-11-04 10:18:28.569690] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:22.979 [2024-11-04 10:18:28.569701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.569708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:22.979 [2024-11-04 10:18:28.569717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.057 ms 00:18:22.979 [2024-11-04 10:18:28.569724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.593699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.593740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:22.979 [2024-11-04 10:18:28.593750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.897 ms 00:18:22.979 [2024-11-04 10:18:28.593757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.605008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.605039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:22.979 [2024-11-04 10:18:28.605049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.178 ms 00:18:22.979 [2024-11-04 10:18:28.605056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.616233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.616264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:22.979 [2024-11-04 10:18:28.616279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.118 ms 00:18:22.979 [2024-11-04 10:18:28.616285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.616906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.616926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:22.979 [2024-11-04 10:18:28.616935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:18:22.979 [2024-11-04 10:18:28.616942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.670524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.670571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:22.979 [2024-11-04 10:18:28.670583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.558 ms 00:18:22.979 [2024-11-04 10:18:28.670591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.680702] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:22.979 [2024-11-04 10:18:28.694005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.694042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:22.979 [2024-11-04 10:18:28.694054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.324 ms 00:18:22.979 [2024-11-04 10:18:28.694061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.694139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.694152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:22.979 [2024-11-04 10:18:28.694161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:22.979 [2024-11-04 10:18:28.694168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.694212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.694220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:22.979 [2024-11-04 10:18:28.694228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:22.979 [2024-11-04 10:18:28.694235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.694261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.694269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:22.979 [2024-11-04 10:18:28.694279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:22.979 [2024-11-04 10:18:28.694286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.979 [2024-11-04 10:18:28.694314] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:22.979 [2024-11-04 10:18:28.694322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.979 [2024-11-04 10:18:28.694329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:22.979 [2024-11-04 10:18:28.694336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:22.980 [2024-11-04 10:18:28.694343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.980 [2024-11-04 10:18:28.717480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.980 [2024-11-04 10:18:28.717518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:22.980 [2024-11-04 10:18:28.717529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.115 ms 00:18:22.980 [2024-11-04 10:18:28.717537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.980 [2024-11-04 10:18:28.717622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:22.980 [2024-11-04 10:18:28.717632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:18:22.980 [2024-11-04 10:18:28.717640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:22.980 [2024-11-04 10:18:28.717648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:22.980 [2024-11-04 10:18:28.718442] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:23.238 [2024-11-04 10:18:28.721347] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.860 ms, result 0 00:18:23.238 [2024-11-04 10:18:28.722224] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:23.238 [2024-11-04 10:18:28.735011] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:24.171  [2024-11-04T10:18:30.850Z] Copying: 44/256 [MB] (44 MBps) [2024-11-04T10:18:31.785Z] Copying: 77/256 [MB] (32 MBps) [2024-11-04T10:18:33.159Z] Copying: 97/256 [MB] (19 MBps) [2024-11-04T10:18:34.088Z] Copying: 119/256 [MB] (22 MBps) [2024-11-04T10:18:35.021Z] Copying: 138/256 [MB] (18 MBps) [2024-11-04T10:18:35.955Z] Copying: 164/256 [MB] (26 MBps) [2024-11-04T10:18:36.886Z] Copying: 198/256 [MB] (33 MBps) [2024-11-04T10:18:37.841Z] Copying: 223/256 [MB] (24 MBps) [2024-11-04T10:18:38.777Z] Copying: 238/256 [MB] (14 MBps) [2024-11-04T10:18:38.777Z] Copying: 256/256 [MB] (average 26 MBps) [2024-11-04 10:18:38.437296] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.032 [2024-11-04 10:18:38.446539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.446576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:33.032 [2024-11-04 10:18:38.446589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:33.032 [2024-11-04 10:18:38.446597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.446617] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:33.032 [2024-11-04 10:18:38.449185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.449221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:33.032 [2024-11-04 10:18:38.449232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.555 ms 00:18:33.032 [2024-11-04 10:18:38.449240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.449489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.449504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:33.032 [2024-11-04 10:18:38.449511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.230 ms 00:18:33.032 [2024-11-04 10:18:38.449519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.453205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.453225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:33.032 [2024-11-04 10:18:38.453238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.671 ms 00:18:33.032 [2024-11-04 10:18:38.453247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:33.032 [2024-11-04 10:18:38.460171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.460198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:33.032 [2024-11-04 10:18:38.460207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:18:33.032 [2024-11-04 10:18:38.460216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.482868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.482898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:33.032 [2024-11-04 10:18:38.482909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.601 ms 00:18:33.032 [2024-11-04 10:18:38.482917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.497037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.497068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:33.032 [2024-11-04 10:18:38.497084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.100 ms 00:18:33.032 [2024-11-04 10:18:38.497092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.497224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.497233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:33.032 [2024-11-04 10:18:38.497242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:18:33.032 [2024-11-04 10:18:38.497249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.519680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.519710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:33.032 [2024-11-04 10:18:38.519720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.409 ms 00:18:33.032 [2024-11-04 10:18:38.519727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.542332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.542361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:33.032 [2024-11-04 10:18:38.542371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.585 ms 00:18:33.032 [2024-11-04 10:18:38.542378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.564569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.564601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.032 [2024-11-04 10:18:38.564610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.172 ms 00:18:33.032 [2024-11-04 10:18:38.564617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.586816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.032 [2024-11-04 10:18:38.586846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.032 [2024-11-04 10:18:38.586856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.154 ms 00:18:33.032 [2024-11-04 
10:18:38.586863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.032 [2024-11-04 10:18:38.586881] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.032 [2024-11-04 10:18:38.586898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.586993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587064] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 
10:18:38.587241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 
00:18:33.032 [2024-11-04 10:18:38.587420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.032 [2024-11-04 10:18:38.587513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 
wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.033 [2024-11-04 10:18:38.587629] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.033 [2024-11-04 10:18:38.587636] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:33.033 [2024-11-04 10:18:38.587644] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.033 [2024-11-04 10:18:38.587651] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.033 [2024-11-04 10:18:38.587658] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.033 [2024-11-04 10:18:38.587665] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.033 [2024-11-04 10:18:38.587672] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.033 [2024-11-04 10:18:38.587679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.033 [2024-11-04 10:18:38.587686] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.033 [2024-11-04 10:18:38.587692] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.033 [2024-11-04 10:18:38.587699] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.033 [2024-11-04 10:18:38.587706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.033 [2024-11-04 10:18:38.587713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.033 [2024-11-04 10:18:38.587721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.825 ms 00:18:33.033 [2024-11-04 10:18:38.587730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.600827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.033 [2024-11-04 10:18:38.600862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.033 [2024-11-04 10:18:38.600876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.079 ms 00:18:33.033 [2024-11-04 10:18:38.600886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.601372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.033 [2024-11-04 10:18:38.601409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.033 [2024-11-04 10:18:38.601422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.441 ms 00:18:33.033 [2024-11-04 10:18:38.601433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.639335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.033 [2024-11-04 10:18:38.639381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.033 [2024-11-04 10:18:38.639395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.033 [2024-11-04 10:18:38.639403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.639489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.033 [2024-11-04 10:18:38.639501] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.033 [2024-11-04 10:18:38.639509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.033 [2024-11-04 10:18:38.639516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.639560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.033 [2024-11-04 10:18:38.639568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.033 [2024-11-04 10:18:38.639576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.033 [2024-11-04 10:18:38.639583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.639600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.033 [2024-11-04 10:18:38.639608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.033 [2024-11-04 10:18:38.639618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.033 [2024-11-04 10:18:38.639625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.033 [2024-11-04 10:18:38.716464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.033 [2024-11-04 10:18:38.716510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.033 [2024-11-04 10:18:38.716521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.033 [2024-11-04 10:18:38.716530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.294 [2024-11-04 10:18:38.780228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.294 [2024-11-04 10:18:38.780324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.294 [2024-11-04 10:18:38.780374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.294 [2024-11-04 10:18:38.780492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.294 [2024-11-04 10:18:38.780549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.294 [2024-11-04 10:18:38.780610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.294 [2024-11-04 10:18:38.780669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.294 [2024-11-04 10:18:38.780677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.294 [2024-11-04 10:18:38.780684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.294 [2024-11-04 10:18:38.780829] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 334.263 ms, result 0 00:18:33.860 00:18:33.860 00:18:33.860 10:18:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:33.860 10:18:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:34.431 10:18:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:34.431 [2024-11-04 10:18:40.048300] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:18:34.431 [2024-11-04 10:18:40.048419] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74066 ] 00:18:34.690 [2024-11-04 10:18:40.205889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.690 [2024-11-04 10:18:40.297801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.947 [2024-11-04 10:18:40.546736] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:34.947 [2024-11-04 10:18:40.546804] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:35.207 [2024-11-04 10:18:40.700702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.700748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:35.207 [2024-11-04 10:18:40.700761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:35.207 [2024-11-04 10:18:40.700770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.703355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.703389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:35.207 [2024-11-04 10:18:40.703399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.560 ms 00:18:35.207 [2024-11-04 10:18:40.703406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.703559] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:35.207 [2024-11-04 10:18:40.704230] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:35.207 [2024-11-04 10:18:40.704251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.704259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.207 [2024-11-04 10:18:40.704276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.699 ms 00:18:35.207 [2024-11-04 10:18:40.704284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.705411] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:35.207 [2024-11-04 10:18:40.717768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.717810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:35.207 [2024-11-04 10:18:40.717824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.358 ms 00:18:35.207 [2024-11-04 10:18:40.717832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.717910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.717922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:35.207 [2024-11-04 10:18:40.717931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:35.207 [2024-11-04 10:18:40.717938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.722524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:35.207 [2024-11-04 10:18:40.722557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.207 [2024-11-04 10:18:40.722566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.547 ms 00:18:35.207 [2024-11-04 10:18:40.722573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.722654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.722663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.207 [2024-11-04 10:18:40.722671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:18:35.207 [2024-11-04 10:18:40.722678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.722701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.722709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:35.207 [2024-11-04 10:18:40.722719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:35.207 [2024-11-04 10:18:40.722726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.722744] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:35.207 [2024-11-04 10:18:40.725903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.725929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.207 [2024-11-04 10:18:40.725939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.163 ms 00:18:35.207 [2024-11-04 10:18:40.725946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.725978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.207 [2024-11-04 10:18:40.725987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:35.207 [2024-11-04 10:18:40.725995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:35.207 [2024-11-04 10:18:40.726002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.207 [2024-11-04 10:18:40.726018] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:35.207 [2024-11-04 10:18:40.726034] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:35.207 [2024-11-04 10:18:40.726069] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:35.208 [2024-11-04 10:18:40.726084] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:35.208 [2024-11-04 10:18:40.726185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:35.208 [2024-11-04 10:18:40.726201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:35.208 [2024-11-04 10:18:40.726212] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:35.208 [2024-11-04 10:18:40.726221] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726230] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726240] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:35.208 [2024-11-04 10:18:40.726248] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:35.208 [2024-11-04 10:18:40.726255] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:35.208 [2024-11-04 10:18:40.726262] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:35.208 [2024-11-04 10:18:40.726269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.208 [2024-11-04 10:18:40.726276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:35.208 [2024-11-04 10:18:40.726283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:18:35.208 [2024-11-04 10:18:40.726290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.208 [2024-11-04 10:18:40.726376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.208 [2024-11-04 10:18:40.726384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:35.208 [2024-11-04 10:18:40.726392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:35.208 [2024-11-04 10:18:40.726401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.208 [2024-11-04 10:18:40.726497] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:35.208 [2024-11-04 10:18:40.726512] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:35.208 [2024-11-04 10:18:40.726520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:35.208 [2024-11-04 10:18:40.726542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:35.208 [2024-11-04 10:18:40.726563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.208 [2024-11-04 10:18:40.726576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:35.208 [2024-11-04 10:18:40.726583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:35.208 [2024-11-04 10:18:40.726589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:35.208 [2024-11-04 10:18:40.726602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:35.208 [2024-11-04 10:18:40.726609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:35.208 [2024-11-04 10:18:40.726615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:35.208 [2024-11-04 10:18:40.726628] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726634] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:35.208 [2024-11-04 10:18:40.726647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:35.208 [2024-11-04 10:18:40.726666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726678] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:35.208 [2024-11-04 10:18:40.726685] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726697] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:35.208 [2024-11-04 10:18:40.726704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:35.208 [2024-11-04 10:18:40.726723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726729] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.208 [2024-11-04 10:18:40.726735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:35.208 [2024-11-04 10:18:40.726741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:35.208 [2024-11-04 10:18:40.726747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:35.208 [2024-11-04 10:18:40.726755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:35.208 [2024-11-04 10:18:40.726761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:35.208 [2024-11-04 10:18:40.726767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:35.208 [2024-11-04 10:18:40.726793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:35.208 [2024-11-04 10:18:40.726801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726807] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:35.208 [2024-11-04 10:18:40.726815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:35.208 [2024-11-04 10:18:40.726822] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726829] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:35.208 [2024-11-04 10:18:40.726839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:35.208 [2024-11-04 10:18:40.726845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:35.208 [2024-11-04 10:18:40.726852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:35.208 
[2024-11-04 10:18:40.726858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:35.208 [2024-11-04 10:18:40.726865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:35.208 [2024-11-04 10:18:40.726871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:35.208 [2024-11-04 10:18:40.726879] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:35.208 [2024-11-04 10:18:40.726888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.726896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:35.208 [2024-11-04 10:18:40.726904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:35.208 [2024-11-04 10:18:40.726911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:35.208 [2024-11-04 10:18:40.726918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:35.208 [2024-11-04 10:18:40.726925] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:35.208 [2024-11-04 10:18:40.726932] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:35.208 [2024-11-04 10:18:40.726939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:35.208 [2024-11-04 10:18:40.726946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:35.208 [2024-11-04 10:18:40.726953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:35.208 [2024-11-04 10:18:40.726960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.726967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.726973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.726980] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.726987] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:35.208 [2024-11-04 10:18:40.726995] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:35.208 [2024-11-04 10:18:40.727003] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.727010] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:35.208 [2024-11-04 10:18:40.727017] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:35.208 [2024-11-04 10:18:40.727024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:35.208 [2024-11-04 10:18:40.727032] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:35.208 [2024-11-04 10:18:40.727039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.208 [2024-11-04 10:18:40.727046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:35.208 [2024-11-04 10:18:40.727054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.610 ms 00:18:35.208 [2024-11-04 10:18:40.727063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.208 [2024-11-04 10:18:40.752177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.208 [2024-11-04 10:18:40.752208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.209 [2024-11-04 10:18:40.752218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.053 ms 00:18:35.209 [2024-11-04 10:18:40.752225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.752347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.752357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:35.209 [2024-11-04 10:18:40.752368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:18:35.209 [2024-11-04 10:18:40.752375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.797713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.797752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.209 [2024-11-04 10:18:40.797764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.318 ms 00:18:35.209 [2024-11-04 10:18:40.797772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.797870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.797882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.209 [2024-11-04 10:18:40.797891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.209 [2024-11-04 10:18:40.797899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.798204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.798227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.209 [2024-11-04 10:18:40.798236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:18:35.209 [2024-11-04 10:18:40.798243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.798368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.798385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.209 [2024-11-04 10:18:40.798393] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:18:35.209 [2024-11-04 10:18:40.798400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.811518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.811549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.209 [2024-11-04 10:18:40.811558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.100 ms 00:18:35.209 [2024-11-04 10:18:40.811566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.823832] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:35.209 [2024-11-04 10:18:40.823866] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:35.209 [2024-11-04 10:18:40.823876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.823884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:35.209 [2024-11-04 10:18:40.823892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.219 ms 00:18:35.209 [2024-11-04 10:18:40.823900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.856882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.856933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:35.209 [2024-11-04 10:18:40.856946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.913 ms 00:18:35.209 [2024-11-04 10:18:40.856955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.868504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.868538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:35.209 [2024-11-04 10:18:40.868549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.466 ms 00:18:35.209 [2024-11-04 10:18:40.868556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.879689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.879722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:35.209 [2024-11-04 10:18:40.879732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.069 ms 00:18:35.209 [2024-11-04 10:18:40.879740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.880365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.880390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:35.209 [2024-11-04 10:18:40.880399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:18:35.209 [2024-11-04 10:18:40.880407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.934592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.209 [2024-11-04 10:18:40.934638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:35.209 [2024-11-04 10:18:40.934649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 54.163 ms 00:18:35.209 [2024-11-04 10:18:40.934657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.209 [2024-11-04 10:18:40.944793] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:35.562 [2024-11-04 10:18:40.957860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.957894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:35.562 [2024-11-04 10:18:40.957905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.115 ms 00:18:35.562 [2024-11-04 10:18:40.957913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.957979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.957991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:35.562 [2024-11-04 10:18:40.957999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:35.562 [2024-11-04 10:18:40.958006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.958050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.958059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:35.562 [2024-11-04 10:18:40.958067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:35.562 [2024-11-04 10:18:40.958074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.958099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.958108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:35.562 [2024-11-04 10:18:40.958118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:35.562 [2024-11-04 10:18:40.958125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.958153] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:35.562 [2024-11-04 10:18:40.958162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.958169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:35.562 [2024-11-04 10:18:40.958177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:35.562 [2024-11-04 10:18:40.958184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.980697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.980733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:35.562 [2024-11-04 10:18:40.980744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.492 ms 00:18:35.562 [2024-11-04 10:18:40.980752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:40.980845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:40.980856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:35.562 [2024-11-04 10:18:40.980865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:18:35.562 [2024-11-04 10:18:40.980872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
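The trace_step notices repeated throughout this run follow a fixed pattern per management step — a "name:" notice, then "duration: <n> ms", then "status:" — and the finish_msg line just below sums the whole sequence (280.957 ms for 'FTL startup'). A minimal sketch for tallying those per-step durations from a saved console log, assuming one notice per line as spdk_tgt emits them; this helper is illustrative only, not part of the SPDK autotest scripts:

#!/usr/bin/env python3
import re
import sys

# One *NOTICE* per input line; the Jenkins wall-clock prefix (e.g. 00:18:35.209)
# in front of each console line does not interfere with these patterns.
NAME = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] name: (.+?)\s*$")
DUR = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] duration: ([0-9.]+) ms")

def step_durations(lines):
    """Pair each 'name:' notice with the 'duration:' notice that follows it."""
    steps, pending = [], None
    for line in lines:
        m = NAME.search(line)
        if m:
            pending = m.group(1)
            continue
        m = DUR.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    return steps

if __name__ == "__main__":
    steps = step_durations(sys.stdin)
    for name, ms in sorted(steps, key=lambda s: -s[1])[:5]:
        print(f"{ms:9.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):9.3f} ms  across {len(steps)} steps")

Fed the startup section above, the two largest contributors are 'Restore P2L checkpoints' (54.163 ms) and 'Initialize NV cache' (45.318 ms); the per-step sum will not exactly match the finish_msg total, which also covers untraced bookkeeping between steps.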
00:18:35.562 [2024-11-04 10:18:40.981950] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.562 [2024-11-04 10:18:40.984845] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.957 ms, result 0 00:18:35.562 [2024-11-04 10:18:40.985562] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.562 [2024-11-04 10:18:40.998498] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:35.562  [2024-11-04T10:18:41.307Z] Copying: 4096/4096 [kB] (average 38 MBps)[2024-11-04 10:18:41.105304] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:35.562 [2024-11-04 10:18:41.113714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.113757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:35.562 [2024-11-04 10:18:41.113769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:35.562 [2024-11-04 10:18:41.113778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.113813] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:35.562 [2024-11-04 10:18:41.116310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.116346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:35.562 [2024-11-04 10:18:41.116355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.486 ms 00:18:35.562 [2024-11-04 10:18:41.116363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.118084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.118116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:35.562 [2024-11-04 10:18:41.118126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.691 ms 00:18:35.562 [2024-11-04 10:18:41.118133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.122110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.122136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:35.562 [2024-11-04 10:18:41.122150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.961 ms 00:18:35.562 [2024-11-04 10:18:41.122157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.129044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.129074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:35.562 [2024-11-04 10:18:41.129084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.861 ms 00:18:35.562 [2024-11-04 10:18:41.129092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.151732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.151765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:35.562 [2024-11-04 10:18:41.151776] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 22.586 ms 00:18:35.562 [2024-11-04 10:18:41.151790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.165871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.165903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:35.562 [2024-11-04 10:18:41.165918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.046 ms 00:18:35.562 [2024-11-04 10:18:41.165928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.166311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.166348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:35.562 [2024-11-04 10:18:41.166360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:18:35.562 [2024-11-04 10:18:41.166368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.189570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.189604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:35.562 [2024-11-04 10:18:41.189615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.177 ms 00:18:35.562 [2024-11-04 10:18:41.189622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.212183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.212214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:35.562 [2024-11-04 10:18:41.212224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.529 ms 00:18:35.562 [2024-11-04 10:18:41.212231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.234401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.234435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:35.562 [2024-11-04 10:18:41.234445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.139 ms 00:18:35.562 [2024-11-04 10:18:41.234453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.257440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.562 [2024-11-04 10:18:41.257480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:35.562 [2024-11-04 10:18:41.257492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.576 ms 00:18:35.562 [2024-11-04 10:18:41.257500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.562 [2024-11-04 10:18:41.257533] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:35.562 [2024-11-04 10:18:41.257551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:35.562 [2024-11-04 10:18:41.257583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:35.562 [2024-11-04 10:18:41.257648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.257997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258135] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:35.563 [2024-11-04 10:18:41.258248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:35.564 [2024-11-04 10:18:41.258314] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:35.564 [2024-11-04 10:18:41.258322] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:35.564 [2024-11-04 10:18:41.258330] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:35.564 [2024-11-04 10:18:41.258337] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:35.564 [2024-11-04 10:18:41.258344] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:35.564 [2024-11-04 10:18:41.258351] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:35.564 [2024-11-04 10:18:41.258358] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:35.564 [2024-11-04 10:18:41.258365] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:35.564 [2024-11-04 10:18:41.258372] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:35.564 [2024-11-04 10:18:41.258379] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:35.564 [2024-11-04 10:18:41.258385] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:35.564 [2024-11-04 10:18:41.258392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.564 [2024-11-04 10:18:41.258400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:35.564 [2024-11-04 10:18:41.258410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.860 ms 00:18:35.564 [2024-11-04 10:18:41.258417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.564 [2024-11-04 10:18:41.270891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.564 [2024-11-04 10:18:41.270927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:35.564 [2024-11-04 10:18:41.270937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.457 ms 00:18:35.564 [2024-11-04 10:18:41.270945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.564 [2024-11-04 10:18:41.271286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:35.564 [2024-11-04 10:18:41.271309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:35.564 [2024-11-04 10:18:41.271318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.306 ms 00:18:35.564 [2024-11-04 10:18:41.271326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.306329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.306384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:35.839 [2024-11-04 10:18:41.306396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.306403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.306488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.306501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:35.839 [2024-11-04 10:18:41.306509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.306516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.306559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.306568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:35.839 [2024-11-04 10:18:41.306577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.306584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.306601] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.306609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:35.839 [2024-11-04 10:18:41.306618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.306625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.384865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.384916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:35.839 [2024-11-04 10:18:41.384928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.384936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:35.839 [2024-11-04 10:18:41.448505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:35.839 [2024-11-04 10:18:41.448580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:35.839 [2024-11-04 10:18:41.448629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:35.839 [2024-11-04 10:18:41.448742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:35.839 [2024-11-04 10:18:41.448813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:35.839 [2024-11-04 10:18:41.448859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:35.839 [2024-11-04 10:18:41.448868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:35.839 [2024-11-04 10:18:41.448876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:35.839 [2024-11-04 10:18:41.448883] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0
00:18:35.839 [2024-11-04 10:18:41.448924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:35.839 [2024-11-04 10:18:41.448934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:35.839 [2024-11-04 10:18:41.448941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:35.839 [2024-11-04 10:18:41.448951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:35.839 [2024-11-04 10:18:41.449078] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 335.338 ms, result 0
00:18:36.784
00:18:36.784
00:18:36.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:18:36.784 10:18:42 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74097
00:18:36.784 10:18:42 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74097
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74097 ']'
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:18:36.784 10:18:42 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:18:36.784 10:18:42 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:18:36.784 [2024-11-04 10:18:42.249797] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization...
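The xtrace above shows the harness launching a fresh spdk_tgt (pid 74097) and blocking in waitforlisten until the JSON-RPC socket at /var/tmp/spdk.sock accepts connections, with max_retries=100. A rough stand-alone equivalent, assuming only that the target speaks SPDK's JSON-RPC protocol over that UNIX socket — rpc_get_methods is a standard SPDK RPC method, but the script itself is an illustrative sketch, not the autotest helper:

#!/usr/bin/env python3
import json
import socket
import time

RPC_ADDR = "/var/tmp/spdk.sock"  # the socket this log is waiting on

def wait_for_rpc(addr=RPC_ADDR, retries=100, delay=0.2):
    """Poll until something is accepting connections on the RPC socket."""
    for _ in range(retries):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            sock.connect(addr)
            return sock              # spdk_tgt is up and listening
        except OSError:
            sock.close()
            time.sleep(delay)        # not listening yet; try again
    raise TimeoutError(f"no RPC listener on {addr}")

if __name__ == "__main__":
    sock = wait_for_rpc()
    # Any cheap call proves the target answers; a real client would keep
    # reading until the JSON response is complete rather than one recv().
    sock.sendall(json.dumps(
        {"jsonrpc": "2.0", "id": 1, "method": "rpc_get_methods"}).encode())
    print(sock.recv(1 << 20).decode())
    sock.close()

The DPDK/EAL initialization of the newly started target continues below.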
00:18:36.784 [2024-11-04 10:18:42.249906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74097 ] 00:18:36.784 [2024-11-04 10:18:42.400256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.784 [2024-11-04 10:18:42.505417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.724 10:18:43 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:37.724 10:18:43 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:37.724 10:18:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:37.724 [2024-11-04 10:18:43.336394] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:37.724 [2024-11-04 10:18:43.336450] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:37.985 [2024-11-04 10:18:43.506479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.506637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:37.985 [2024-11-04 10:18:43.506659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:37.985 [2024-11-04 10:18:43.506668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.509276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.509311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:37.985 [2024-11-04 10:18:43.509322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.586 ms 00:18:37.985 [2024-11-04 10:18:43.509329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.509400] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:37.985 [2024-11-04 10:18:43.510093] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:37.985 [2024-11-04 10:18:43.510114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.510122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:37.985 [2024-11-04 10:18:43.510132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.723 ms 00:18:37.985 [2024-11-04 10:18:43.510139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.511276] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:37.985 [2024-11-04 10:18:43.523515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.523556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:37.985 [2024-11-04 10:18:43.523568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.243 ms 00:18:37.985 [2024-11-04 10:18:43.523578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.523658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.523671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:37.985 [2024-11-04 10:18:43.523679] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:18:37.985 [2024-11-04 10:18:43.523688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.528515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.528552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:37.985 [2024-11-04 10:18:43.528561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.783 ms 00:18:37.985 [2024-11-04 10:18:43.528571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.985 [2024-11-04 10:18:43.528661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.985 [2024-11-04 10:18:43.528673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:37.985 [2024-11-04 10:18:43.528680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:37.986 [2024-11-04 10:18:43.528689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.528712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.986 [2024-11-04 10:18:43.528725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:37.986 [2024-11-04 10:18:43.528733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:37.986 [2024-11-04 10:18:43.528741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.528762] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:37.986 [2024-11-04 10:18:43.531891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.986 [2024-11-04 10:18:43.531917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:37.986 [2024-11-04 10:18:43.531928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.131 ms 00:18:37.986 [2024-11-04 10:18:43.531935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.531970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.986 [2024-11-04 10:18:43.531978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:37.986 [2024-11-04 10:18:43.531987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:37.986 [2024-11-04 10:18:43.531995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.532014] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:37.986 [2024-11-04 10:18:43.532031] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:37.986 [2024-11-04 10:18:43.532071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:37.986 [2024-11-04 10:18:43.532085] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:37.986 [2024-11-04 10:18:43.532191] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:37.986 [2024-11-04 10:18:43.532201] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:37.986 [2024-11-04 10:18:43.532213] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:37.986 [2024-11-04 10:18:43.532222] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532234] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532243] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:37.986 [2024-11-04 10:18:43.532251] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:37.986 [2024-11-04 10:18:43.532259] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:37.986 [2024-11-04 10:18:43.532277] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:37.986 [2024-11-04 10:18:43.532284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.986 [2024-11-04 10:18:43.532293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:37.986 [2024-11-04 10:18:43.532301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:18:37.986 [2024-11-04 10:18:43.532309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.532398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.986 [2024-11-04 10:18:43.532407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:37.986 [2024-11-04 10:18:43.532416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:37.986 [2024-11-04 10:18:43.532425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.986 [2024-11-04 10:18:43.532523] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:37.986 [2024-11-04 10:18:43.532534] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:37.986 [2024-11-04 10:18:43.532541] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532550] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532557] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:37.986 [2024-11-04 10:18:43.532566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:37.986 [2024-11-04 10:18:43.532591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:37.986 [2024-11-04 10:18:43.532605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:37.986 [2024-11-04 10:18:43.532613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:37.986 [2024-11-04 10:18:43.532620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:37.986 [2024-11-04 10:18:43.532628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:37.986 [2024-11-04 10:18:43.532634] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:37.986 [2024-11-04 10:18:43.532642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 
[2024-11-04 10:18:43.532648] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:37.986 [2024-11-04 10:18:43.532656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:37.986 [2024-11-04 10:18:43.532682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:37.986 [2024-11-04 10:18:43.532705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:37.986 [2024-11-04 10:18:43.532727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532750] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:37.986 [2024-11-04 10:18:43.532758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532774] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:37.986 [2024-11-04 10:18:43.532798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:37.986 [2024-11-04 10:18:43.532814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:37.986 [2024-11-04 10:18:43.532821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:37.986 [2024-11-04 10:18:43.532828] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:37.986 [2024-11-04 10:18:43.532836] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:37.986 [2024-11-04 10:18:43.532843] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:37.986 [2024-11-04 10:18:43.532852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:37.986 [2024-11-04 10:18:43.532867] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:37.986 [2024-11-04 10:18:43.532874] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532882] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:37.986 [2024-11-04 10:18:43.532889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:37.986 [2024-11-04 10:18:43.532897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:37.986 [2024-11-04 10:18:43.532916] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:37.986 [2024-11-04 10:18:43.532922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:37.986 [2024-11-04 10:18:43.532930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:37.986 [2024-11-04 10:18:43.532937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:37.986 [2024-11-04 10:18:43.532945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:37.986 [2024-11-04 10:18:43.532951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:37.986 [2024-11-04 10:18:43.532961] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:37.986 [2024-11-04 10:18:43.532969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:37.986 [2024-11-04 10:18:43.532981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:37.986 [2024-11-04 10:18:43.532989] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:37.986 [2024-11-04 10:18:43.532997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:37.986 [2024-11-04 10:18:43.533005] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:37.986 [2024-11-04 10:18:43.533014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:37.986 [2024-11-04 10:18:43.533021] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:37.986 [2024-11-04 10:18:43.533029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:37.986 [2024-11-04 10:18:43.533036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:37.986 [2024-11-04 10:18:43.533045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:37.986 [2024-11-04 10:18:43.533052] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:37.986 [2024-11-04 10:18:43.533061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:37.987 [2024-11-04 10:18:43.533068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:37.987 [2024-11-04 10:18:43.533076] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:37.987 [2024-11-04 10:18:43.533083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:37.987 [2024-11-04 10:18:43.533092] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:37.987 [2024-11-04 
10:18:43.533100] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:37.987 [2024-11-04 10:18:43.533111] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:37.987 [2024-11-04 10:18:43.533118] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:37.987 [2024-11-04 10:18:43.533126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:37.987 [2024-11-04 10:18:43.533133] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:37.987 [2024-11-04 10:18:43.533142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.533148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:37.987 [2024-11-04 10:18:43.533157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.686 ms 00:18:37.987 [2024-11-04 10:18:43.533164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.558485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.558518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:37.987 [2024-11-04 10:18:43.558529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.255 ms 00:18:37.987 [2024-11-04 10:18:43.558537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.558654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.558663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:37.987 [2024-11-04 10:18:43.558672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:37.987 [2024-11-04 10:18:43.558679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.588613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.588642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:37.987 [2024-11-04 10:18:43.588657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.911 ms 00:18:37.987 [2024-11-04 10:18:43.588665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.588716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.588724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:37.987 [2024-11-04 10:18:43.588733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:37.987 [2024-11-04 10:18:43.588740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.589067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.589080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:37.987 [2024-11-04 10:18:43.589090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:18:37.987 [2024-11-04 10:18:43.589099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.589214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.589227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:37.987 [2024-11-04 10:18:43.589236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:18:37.987 [2024-11-04 10:18:43.589244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.603244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.603272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:37.987 [2024-11-04 10:18:43.603283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.979 ms 00:18:37.987 [2024-11-04 10:18:43.603291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.615488] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:37.987 [2024-11-04 10:18:43.615520] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:37.987 [2024-11-04 10:18:43.615532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.615539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:37.987 [2024-11-04 10:18:43.615549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.139 ms 00:18:37.987 [2024-11-04 10:18:43.615556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.639640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.639674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:37.987 [2024-11-04 10:18:43.639687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.016 ms 00:18:37.987 [2024-11-04 10:18:43.639695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.651104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.651134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:37.987 [2024-11-04 10:18:43.651148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.341 ms 00:18:37.987 [2024-11-04 10:18:43.651155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.662297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.662419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:37.987 [2024-11-04 10:18:43.662438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.079 ms 00:18:37.987 [2024-11-04 10:18:43.662445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:37.987 [2024-11-04 10:18:43.663058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:37.987 [2024-11-04 10:18:43.663076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:37.987 [2024-11-04 10:18:43.663087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.526 ms 00:18:37.987 [2024-11-04 10:18:43.663094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 
10:18:43.728632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.728683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:38.246 [2024-11-04 10:18:43.728700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.513 ms 00:18:38.246 [2024-11-04 10:18:43.728708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.738884] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:38.246 [2024-11-04 10:18:43.752207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.752249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:38.246 [2024-11-04 10:18:43.752261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.382 ms 00:18:38.246 [2024-11-04 10:18:43.752287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.752357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.752368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:38.246 [2024-11-04 10:18:43.752377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:38.246 [2024-11-04 10:18:43.752386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.752432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.752442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:38.246 [2024-11-04 10:18:43.752450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:38.246 [2024-11-04 10:18:43.752459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.752483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.752493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:38.246 [2024-11-04 10:18:43.752501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:38.246 [2024-11-04 10:18:43.752512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.752540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:38.246 [2024-11-04 10:18:43.752553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.752560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:38.246 [2024-11-04 10:18:43.752571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:38.246 [2024-11-04 10:18:43.752579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.775954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.776088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:38.246 [2024-11-04 10:18:43.776109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.350 ms 00:18:38.246 [2024-11-04 10:18:43.776116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:38.246 [2024-11-04 10:18:43.776204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:38.246 [2024-11-04 10:18:43.776215] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:38.246 [2024-11-04 10:18:43.776225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms
00:18:38.246 [2024-11-04 10:18:43.776232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:38.246 [2024-11-04 10:18:43.777009] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:38.246 [2024-11-04 10:18:43.779890] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 270.242 ms, result 0
00:18:38.246 [2024-11-04 10:18:43.780948] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:38.246 Some configs were skipped because the RPC state that can call them passed over.
00:18:38.246 10:18:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:18:38.506 [2024-11-04 10:18:44.007504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:38.506 [2024-11-04 10:18:44.007676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:38.506 [2024-11-04 10:18:44.007740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.797 ms
00:18:38.506 [2024-11-04 10:18:44.007766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:38.506 [2024-11-04 10:18:44.007877] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.177 ms, result 0
00:18:38.506 true
00:18:38.506 10:18:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:18:38.506 [2024-11-04 10:18:44.207280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:38.506 [2024-11-04 10:18:44.207441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:18:38.506 [2024-11-04 10:18:44.207567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.285 ms
00:18:38.506 [2024-11-04 10:18:44.207598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:38.506 [2024-11-04 10:18:44.207697] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.708 ms, result 0
00:18:38.506 true
00:18:38.506 10:18:44 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74097
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74097 ']'
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74097
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74097
00:18:38.506 killing process with pid 74097
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74097'
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74097
00:18:38.506 10:18:44 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74097
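The two bdev_ftl_unmap calls above are the trim steps under test (trim.sh@99 and trim.sh@100): the first trims the first 1024 blocks of ftl0, the second trims the last 1024 blocks of the 23592960-entry L2P range (23592960 - 1024 = 23591936). A minimal sketch of the same calls against a running SPDK target, assuming the default rpc.py socket and the repo layout shown in this log:

  # trim the first 1024 blocks of the FTL bdev (as in trim.sh@99)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
  # trim the last 1024 blocks of the L2P range (as in trim.sh@100)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024

Each call runs a short 'FTL trim' management process (the Process trim step traced above) and prints true, the RPC's return value, on success.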
00:18:39.445 [2024-11-04 10:18:44.922113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.445 [2024-11-04 10:18:44.922169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:39.445 [2024-11-04 10:18:44.922182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:39.445 [2024-11-04 10:18:44.922191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.445 [2024-11-04 10:18:44.922213] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:18:39.445 [2024-11-04 10:18:44.924866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.924899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:39.446 [2024-11-04 10:18:44.924913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.636 ms
00:18:39.446 [2024-11-04 10:18:44.924921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.925219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.925230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:39.446 [2024-11-04 10:18:44.925240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms
00:18:39.446 [2024-11-04 10:18:44.925247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.929303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.929333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:39.446 [2024-11-04 10:18:44.929344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.035 ms
00:18:39.446 [2024-11-04 10:18:44.929353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.936240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.936427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:39.446 [2024-11-04 10:18:44.936449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms
00:18:39.446 [2024-11-04 10:18:44.936457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.945943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.945973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:39.446 [2024-11-04 10:18:44.945987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.433 ms
00:18:39.446 [2024-11-04 10:18:44.945999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.953566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.953599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:39.446 [2024-11-04 10:18:44.953613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.527 ms
00:18:39.446 [2024-11-04 10:18:44.953622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:39.446 [2024-11-04 10:18:44.953757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:39.446 [2024-11-04 10:18:44.953767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:39.446 [2024-11-04 10:18:44.953777] mngt/ftl_mngt.c:
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:39.446 [2024-11-04 10:18:44.953807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.446 [2024-11-04 10:18:44.963307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.446 [2024-11-04 10:18:44.963429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:39.446 [2024-11-04 10:18:44.963447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.479 ms 00:18:39.446 [2024-11-04 10:18:44.963454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.446 [2024-11-04 10:18:44.972938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.446 [2024-11-04 10:18:44.972967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:39.446 [2024-11-04 10:18:44.972979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.449 ms 00:18:39.446 [2024-11-04 10:18:44.972986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.446 [2024-11-04 10:18:44.981733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.446 [2024-11-04 10:18:44.981762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:39.446 [2024-11-04 10:18:44.981772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.708 ms 00:18:39.446 [2024-11-04 10:18:44.981779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.446 [2024-11-04 10:18:44.990819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.446 [2024-11-04 10:18:44.990846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:39.446 [2024-11-04 10:18:44.990857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.952 ms 00:18:39.446 [2024-11-04 10:18:44.990863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.446 [2024-11-04 10:18:44.990898] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:39.446 [2024-11-04 10:18:44.990911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.990997] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 
[2024-11-04 10:18:44.991200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:39.446 [2024-11-04 10:18:44.991349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:39.447 [2024-11-04 10:18:44.991403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:39.447 [2024-11-04 10:18:44.991736] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:39.447 [2024-11-04 10:18:44.991747] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:39.447 [2024-11-04 10:18:44.991761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:39.447 [2024-11-04 10:18:44.991772] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:39.447 [2024-11-04 10:18:44.991778] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:39.447 [2024-11-04 10:18:44.991804] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:39.447 [2024-11-04 10:18:44.991811] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:39.447 [2024-11-04 10:18:44.991820] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:39.447 [2024-11-04 10:18:44.991827] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:39.447 [2024-11-04 10:18:44.991835] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:39.447 [2024-11-04 10:18:44.991841] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:39.447 [2024-11-04 10:18:44.991850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:39.447 [2024-11-04 10:18:44.991857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:39.447 [2024-11-04 10:18:44.991866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:18:39.447 [2024-11-04 10:18:44.991873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.004136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.447 [2024-11-04 10:18:45.004164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:39.447 [2024-11-04 10:18:45.004177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.228 ms 00:18:39.447 [2024-11-04 10:18:45.004184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.004554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.447 [2024-11-04 10:18:45.004564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:39.447 [2024-11-04 10:18:45.004574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:18:39.447 [2024-11-04 10:18:45.004583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.041256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.041363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:39.447 [2024-11-04 10:18:45.041379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.041385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.041467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.041474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:39.447 [2024-11-04 10:18:45.041482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.041489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.041523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.041530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:39.447 [2024-11-04 10:18:45.041539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.041544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.041559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.041564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:39.447 [2024-11-04 10:18:45.041572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.041577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.099985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.100117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:39.447 [2024-11-04 10:18:45.100133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.100139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 
10:18:45.149967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.447 [2024-11-04 10:18:45.150005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:39.447 [2024-11-04 10:18:45.150018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.447 [2024-11-04 10:18:45.150024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.447 [2024-11-04 10:18:45.150988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.448 [2024-11-04 10:18:45.151111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.448 [2024-11-04 10:18:45.151161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.448 [2024-11-04 10:18:45.151260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:39.448 [2024-11-04 10:18:45.151305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.448 [2024-11-04 10:18:45.151358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:39.448 [2024-11-04 10:18:45.151405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.448 [2024-11-04 10:18:45.151412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:39.448 [2024-11-04 10:18:45.151418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.448 [2024-11-04 10:18:45.151521] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 229.393 ms, result 0 00:18:40.386 10:18:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:40.386 [2024-11-04 10:18:45.856656] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization...
00:18:40.386 [2024-11-04 10:18:45.856947] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74144 ]
00:18:40.386 [2024-11-04 10:18:46.018522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:40.386 [2024-11-04 10:18:46.115531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:40.647 [2024-11-04 10:18:46.366254] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:40.647 [2024-11-04 10:18:46.366309] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:40.909 [2024-11-04 10:18:46.520314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:40.909 [2024-11-04 10:18:46.520378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:40.909 [2024-11-04 10:18:46.520391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:40.909 [2024-11-04 10:18:46.520399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:40.909 [2024-11-04 10:18:46.523033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:40.909 [2024-11-04 10:18:46.523065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:40.909 [2024-11-04 10:18:46.523075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.616 ms
00:18:40.909 [2024-11-04 10:18:46.523082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:40.909 [2024-11-04 10:18:46.523147] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:40.909 [2024-11-04 10:18:46.523864] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:40.909 [2024-11-04 10:18:46.523890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:40.909 [2024-11-04 10:18:46.523898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:40.909 [2024-11-04 10:18:46.523906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms
00:18:40.909 [2024-11-04 10:18:46.523913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:40.909 [2024-11-04 10:18:46.525031] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:18:40.909 [2024-11-04 10:18:46.537231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:40.909 [2024-11-04 10:18:46.537264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:40.909 [2024-11-04 10:18:46.537279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.201 ms
00:18:40.909 [2024-11-04 10:18:46.537287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:40.909 [2024-11-04 10:18:46.537367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:40.909 [2024-11-04 10:18:46.537378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:40.909 [2024-11-04 10:18:46.537387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:18:40.909 [2024-11-04
10:18:46.537394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.542122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.909 [2024-11-04 10:18:46.542154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:40.909 [2024-11-04 10:18:46.542163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.683 ms 00:18:40.909 [2024-11-04 10:18:46.542171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.542250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.909 [2024-11-04 10:18:46.542260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:40.909 [2024-11-04 10:18:46.542268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:40.909 [2024-11-04 10:18:46.542276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.542298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.909 [2024-11-04 10:18:46.542307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:40.909 [2024-11-04 10:18:46.542317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:40.909 [2024-11-04 10:18:46.542324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.542343] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:40.909 [2024-11-04 10:18:46.545583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.909 [2024-11-04 10:18:46.545608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:40.909 [2024-11-04 10:18:46.545617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:18:40.909 [2024-11-04 10:18:46.545624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.545657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.909 [2024-11-04 10:18:46.545665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:40.909 [2024-11-04 10:18:46.545673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:40.909 [2024-11-04 10:18:46.545680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.909 [2024-11-04 10:18:46.545696] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:40.909 [2024-11-04 10:18:46.545714] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:40.909 [2024-11-04 10:18:46.545749] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:40.910 [2024-11-04 10:18:46.545764] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:40.910 [2024-11-04 10:18:46.545879] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:40.910 [2024-11-04 10:18:46.545891] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:40.910 [2024-11-04 10:18:46.545901] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
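The trace from here back to trim.sh@105 belongs to the spdk_dd read-back step: spdk_dd opens ftl0 as an input bdev using the JSON config saved by the test and copies 65536 blocks out to a data file, which is what drives the second FTL startup sequence being logged around this point. A minimal sketch of the invocation, reassembled from the line-wrapped command above:

  # read 65536 blocks from the ftl0 bdev into test/ftl/data (as in trim.sh@105)
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json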
00:18:40.910 [2024-11-04 10:18:46.545911] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:40.910 [2024-11-04 10:18:46.545920] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:40.910 [2024-11-04 10:18:46.545930] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:40.910 [2024-11-04 10:18:46.545937] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:40.910 [2024-11-04 10:18:46.545944] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:40.910 [2024-11-04 10:18:46.545951] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:40.910 [2024-11-04 10:18:46.545958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.910 [2024-11-04 10:18:46.545965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:40.910 [2024-11-04 10:18:46.545973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.263 ms 00:18:40.910 [2024-11-04 10:18:46.545980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.910 [2024-11-04 10:18:46.546066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.910 [2024-11-04 10:18:46.546074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:40.910 [2024-11-04 10:18:46.546082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:40.910 [2024-11-04 10:18:46.546091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.910 [2024-11-04 10:18:46.546187] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:40.910 [2024-11-04 10:18:46.546196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:40.910 [2024-11-04 10:18:46.546204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:40.910 [2024-11-04 10:18:46.546225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:40.910 [2024-11-04 10:18:46.546246] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.910 [2024-11-04 10:18:46.546259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:40.910 [2024-11-04 10:18:46.546265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:40.910 [2024-11-04 10:18:46.546271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:40.910 [2024-11-04 10:18:46.546284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:40.910 [2024-11-04 10:18:46.546290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:40.910 [2024-11-04 10:18:46.546296] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:40.910 [2024-11-04 10:18:46.546309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:40.910 [2024-11-04 10:18:46.546328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:40.910 [2024-11-04 10:18:46.546347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:40.910 [2024-11-04 10:18:46.546367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:40.910 [2024-11-04 10:18:46.546386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:40.910 [2024-11-04 10:18:46.546404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.910 [2024-11-04 10:18:46.546417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:40.910 [2024-11-04 10:18:46.546423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:40.910 [2024-11-04 10:18:46.546429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:40.910 [2024-11-04 10:18:46.546436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:40.910 [2024-11-04 10:18:46.546442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:40.910 [2024-11-04 10:18:46.546448] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546455] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:40.910 [2024-11-04 10:18:46.546461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:40.910 [2024-11-04 10:18:46.546468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546474] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:40.910 [2024-11-04 10:18:46.546482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:40.910 [2024-11-04 10:18:46.546489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:40.910 [2024-11-04 10:18:46.546504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:40.910 [2024-11-04 10:18:46.546511] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:40.910 [2024-11-04 10:18:46.546518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:40.910 [2024-11-04 10:18:46.546524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:40.910 [2024-11-04 10:18:46.546530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:40.910 [2024-11-04 10:18:46.546537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:40.910 [2024-11-04 10:18:46.546545] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:40.910 [2024-11-04 10:18:46.546554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:40.910 [2024-11-04 10:18:46.546569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:40.910 [2024-11-04 10:18:46.546575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:40.910 [2024-11-04 10:18:46.546583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:40.910 [2024-11-04 10:18:46.546590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:40.910 [2024-11-04 10:18:46.546597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:40.910 [2024-11-04 10:18:46.546604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:40.910 [2024-11-04 10:18:46.546611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:40.910 [2024-11-04 10:18:46.546618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:40.910 [2024-11-04 10:18:46.546624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:40.910 [2024-11-04 10:18:46.546659] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:40.910 [2024-11-04 10:18:46.546667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:40.910 [2024-11-04 10:18:46.546682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:40.910 [2024-11-04 10:18:46.546689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:40.910 [2024-11-04 10:18:46.546696] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:40.910 [2024-11-04 10:18:46.546704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.910 [2024-11-04 10:18:46.546711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:40.910 [2024-11-04 10:18:46.546718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:18:40.910 [2024-11-04 10:18:46.546727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.910 [2024-11-04 10:18:46.572484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.910 [2024-11-04 10:18:46.572611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:40.911 [2024-11-04 10:18:46.572675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.686 ms 00:18:40.911 [2024-11-04 10:18:46.572699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.572846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.572875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:40.911 [2024-11-04 10:18:46.572994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:40.911 [2024-11-04 10:18:46.573016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.617497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.617628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:40.911 [2024-11-04 10:18:46.617985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.446 ms 00:18:40.911 [2024-11-04 10:18:46.618029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.618178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.618208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:40.911 [2024-11-04 10:18:46.618228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:40.911 [2024-11-04 10:18:46.618296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.618614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.618671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:40.911 [2024-11-04 10:18:46.618749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:18:40.911 [2024-11-04 10:18:46.618771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.618919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.618960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:40.911 [2024-11-04 10:18:46.619068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:18:40.911 [2024-11-04 10:18:46.619087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.632161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.632258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:40.911 [2024-11-04 10:18:46.632355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.044 ms 00:18:40.911 [2024-11-04 10:18:46.632377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:40.911 [2024-11-04 10:18:46.644495] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:40.911 [2024-11-04 10:18:46.644608] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:40.911 [2024-11-04 10:18:46.644664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:40.911 [2024-11-04 10:18:46.644684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:40.911 [2024-11-04 10:18:46.644703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.180 ms 00:18:40.911 [2024-11-04 10:18:46.644721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.668646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.668752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:41.170 [2024-11-04 10:18:46.668815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.804 ms 00:18:41.170 [2024-11-04 10:18:46.668838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.680162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.680256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:41.170 [2024-11-04 10:18:46.680308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.262 ms 00:18:41.170 [2024-11-04 10:18:46.680329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.691376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.691470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:41.170 [2024-11-04 10:18:46.691484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.982 ms 00:18:41.170 [2024-11-04 10:18:46.691491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.692119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.692133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:41.170 [2024-11-04 10:18:46.692143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:18:41.170 [2024-11-04 10:18:46.692149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.745569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 
10:18:46.745716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:41.170 [2024-11-04 10:18:46.745732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.397 ms 00:18:41.170 [2024-11-04 10:18:46.745740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.755943] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:41.170 [2024-11-04 10:18:46.768970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.769004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:41.170 [2024-11-04 10:18:46.769015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.130 ms 00:18:41.170 [2024-11-04 10:18:46.769022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.769093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.769106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:41.170 [2024-11-04 10:18:46.769114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:41.170 [2024-11-04 10:18:46.769122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.769164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.769173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:41.170 [2024-11-04 10:18:46.769180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:18:41.170 [2024-11-04 10:18:46.769187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.769213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.769221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:41.170 [2024-11-04 10:18:46.769231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:41.170 [2024-11-04 10:18:46.769238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.769266] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:41.170 [2024-11-04 10:18:46.769274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.769281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:41.170 [2024-11-04 10:18:46.769289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:41.170 [2024-11-04 10:18:46.769296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.792446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.792481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:41.170 [2024-11-04 10:18:46.792493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.128 ms 00:18:41.170 [2024-11-04 10:18:46.792500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.792582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:41.170 [2024-11-04 10:18:46.792593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:41.170 [2024-11-04 
10:18:46.792601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:41.170 [2024-11-04 10:18:46.792608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:41.170 [2024-11-04 10:18:46.793465] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:41.171 [2024-11-04 10:18:46.796411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.893 ms, result 0 00:18:41.171 [2024-11-04 10:18:46.797041] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:41.171 [2024-11-04 10:18:46.809792] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:42.553  [2024-11-04T10:18:48.878Z] Copying: 43/256 [MB] (43 MBps) [2024-11-04T10:18:50.266Z] Copying: 84/256 [MB] (40 MBps) [2024-11-04T10:18:51.206Z] Copying: 126/256 [MB] (41 MBps) [2024-11-04T10:18:52.149Z] Copying: 169/256 [MB] (43 MBps) [2024-11-04T10:18:53.092Z] Copying: 210/256 [MB] (41 MBps) [2024-11-04T10:18:53.092Z] Copying: 250/256 [MB] (39 MBps) [2024-11-04T10:18:53.666Z] Copying: 256/256 [MB] (average 41 MBps)[2024-11-04 10:18:53.385739] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:47.921 [2024-11-04 10:18:53.398054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.921 [2024-11-04 10:18:53.398219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:47.921 [2024-11-04 10:18:53.398239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:47.922 [2024-11-04 10:18:53.398250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.398278] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:47.922 [2024-11-04 10:18:53.401169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.401204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:47.922 [2024-11-04 10:18:53.401214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:18:47.922 [2024-11-04 10:18:53.401222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.401490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.401505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:47.922 [2024-11-04 10:18:53.401514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.244 ms 00:18:47.922 [2024-11-04 10:18:53.401521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.405212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.405232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:47.922 [2024-11-04 10:18:53.405245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.675 ms 00:18:47.922 [2024-11-04 10:18:53.405254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.412187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.412402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish 
L2P trims 00:18:47.922 [2024-11-04 10:18:53.412417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.917 ms 00:18:47.922 [2024-11-04 10:18:53.412424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.435381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.435484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:47.922 [2024-11-04 10:18:53.435498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.899 ms 00:18:47.922 [2024-11-04 10:18:53.435506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.448576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.448606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:47.922 [2024-11-04 10:18:53.448622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.048 ms 00:18:47.922 [2024-11-04 10:18:53.448630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.448749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.448757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:47.922 [2024-11-04 10:18:53.448766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:47.922 [2024-11-04 10:18:53.448773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.471891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.471919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:47.922 [2024-11-04 10:18:53.471929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.080 ms 00:18:47.922 [2024-11-04 10:18:53.471937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.494305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.494417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:47.922 [2024-11-04 10:18:53.494431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.348 ms 00:18:47.922 [2024-11-04 10:18:53.494438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.516233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.516262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:47.922 [2024-11-04 10:18:53.516276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.774 ms 00:18:47.922 [2024-11-04 10:18:53.516284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.538274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.922 [2024-11-04 10:18:53.538302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:47.922 [2024-11-04 10:18:53.538311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.945 ms 00:18:47.922 [2024-11-04 10:18:53.538318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.922 [2024-11-04 10:18:53.538337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:47.922 [2024-11-04 
10:18:53.538353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 
[2024-11-04 10:18:53.538537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:47.922 [2024-11-04 10:18:53.538728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 
state: free 00:18:47.922 [2024-11-04 10:18:53.538736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 
0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.538995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:47.923 [2024-11-04 10:18:53.539134] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:47.923 [2024-11-04 10:18:53.539141] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: c6ad6e55-f413-4941-b6dd-4460bc0cd26d 00:18:47.923 [2024-11-04 10:18:53.539149] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:47.923 [2024-11-04 10:18:53.539156] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:47.923 [2024-11-04 10:18:53.539163] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:47.923 [2024-11-04 10:18:53.539170] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:47.923 [2024-11-04 10:18:53.539177] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:47.923 [2024-11-04 10:18:53.539185] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:47.923 [2024-11-04 10:18:53.539191] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:47.923 [2024-11-04 10:18:53.539198] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:47.923 [2024-11-04 10:18:53.539204] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:47.923 [2024-11-04 10:18:53.539211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.923 [2024-11-04 10:18:53.539218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:47.923 [2024-11-04 10:18:53.539226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.875 ms 00:18:47.923 [2024-11-04 10:18:53.539234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.551250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.923 [2024-11-04 10:18:53.551278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:47.923 [2024-11-04 10:18:53.551288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.000 ms 00:18:47.923 [2024-11-04 10:18:53.551296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.551640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:47.923 [2024-11-04 10:18:53.551652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:47.923 [2024-11-04 10:18:53.551660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:18:47.923 [2024-11-04 10:18:53.551667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.586098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.923 [2024-11-04 10:18:53.586129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:47.923 [2024-11-04 10:18:53.586140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.923 [2024-11-04 10:18:53.586147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.586230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.923 [2024-11-04 10:18:53.586240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:47.923 [2024-11-04 10:18:53.586248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.923 [2024-11-04 10:18:53.586255] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.586296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.923 [2024-11-04 10:18:53.586305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:47.923 [2024-11-04 10:18:53.586313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.923 [2024-11-04 10:18:53.586319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:47.923 [2024-11-04 10:18:53.586336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:47.923 [2024-11-04 10:18:53.586343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:47.923 [2024-11-04 10:18:53.586353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:47.923 [2024-11-04 10:18:53.586360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.662760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.662824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.184 [2024-11-04 10:18:53.662836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.662844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.184 [2024-11-04 10:18:53.725564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.725573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.184 [2024-11-04 10:18:53.725656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.725664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.184 [2024-11-04 10:18:53.725706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.725715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.184 [2024-11-04 10:18:53.725843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.725850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:48.184 [2024-11-04 10:18:53.725897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:18:48.184 [2024-11-04 10:18:53.725904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.725941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.725949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.184 [2024-11-04 10:18:53.725957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.725964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.726003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:48.184 [2024-11-04 10:18:53.726012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.184 [2024-11-04 10:18:53.726020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:48.184 [2024-11-04 10:18:53.726027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.184 [2024-11-04 10:18:53.726154] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 328.103 ms, result 0 00:18:48.758 00:18:48.758 00:18:48.758 10:18:54 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:49.332 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:49.332 10:18:54 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:49.332 10:18:54 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:49.332 10:18:54 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:49.332 10:18:54 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:49.332 10:18:54 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:49.332 10:18:55 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:49.332 10:18:55 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74097 00:18:49.332 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74097 ']' 00:18:49.332 Process with pid 74097 is not found 00:18:49.332 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74097 00:18:49.332 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74097) - No such process 00:18:49.332 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74097 is not found' 00:18:49.332 00:18:49.332 real 1m0.155s 00:18:49.332 user 1m26.332s 00:18:49.332 sys 0m5.115s 00:18:49.332 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:49.332 ************************************ 00:18:49.332 END TEST ftl_trim 00:18:49.332 ************************************ 00:18:49.332 10:18:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:49.593 10:18:55 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:49.593 10:18:55 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:49.593 10:18:55 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:49.593 10:18:55 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:49.593 ************************************ 00:18:49.593 START TEST ftl_restore 00:18:49.593 ************************************ 00:18:49.593 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # 
/home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:49.593 * Looking for test storage... 00:18:49.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.593 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:49.593 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:18:49.593 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:49.593 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:49.593 10:18:55 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:49.594 10:18:55 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:49.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.594 --rc genhtml_branch_coverage=1 00:18:49.594 --rc genhtml_function_coverage=1 00:18:49.594 --rc genhtml_legend=1 00:18:49.594 --rc geninfo_all_blocks=1 00:18:49.594 --rc geninfo_unexecuted_blocks=1 00:18:49.594 00:18:49.594 ' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:49.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.594 --rc genhtml_branch_coverage=1 00:18:49.594 --rc genhtml_function_coverage=1 00:18:49.594 --rc genhtml_legend=1 00:18:49.594 --rc geninfo_all_blocks=1 00:18:49.594 --rc geninfo_unexecuted_blocks=1 00:18:49.594 00:18:49.594 ' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:49.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.594 --rc genhtml_branch_coverage=1 00:18:49.594 --rc genhtml_function_coverage=1 00:18:49.594 --rc genhtml_legend=1 00:18:49.594 --rc geninfo_all_blocks=1 00:18:49.594 --rc geninfo_unexecuted_blocks=1 00:18:49.594 00:18:49.594 ' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:49.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:49.594 --rc genhtml_branch_coverage=1 00:18:49.594 --rc genhtml_function_coverage=1 00:18:49.594 --rc genhtml_legend=1 00:18:49.594 --rc geninfo_all_blocks=1 00:18:49.594 --rc geninfo_unexecuted_blocks=1 00:18:49.594 00:18:49.594 ' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
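The cmp_versions xtrace just above is the harness checking whether the installed lcov (1.15) predates version 2 before it picks coverage flags: both version strings are split on ., - and :, then compared component by component. A condensed sketch of that logic follows; the function names come from the trace, but the bodies are a reconstruction, not the actual scripts/common.sh:

  # Sketch of the comparison stepped through above (reconstructed, not copied).
  lt() { cmp_versions "$1" '<' "$2"; }
  cmp_versions() {
      local IFS=.-:                      # read -ra below splits on dots, dashes, colons
      local ver1 ver2 v
      read -ra ver1 <<< "$1"
      read -ra ver2 <<< "$3"
      for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
          if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
              [[ $2 == '>' ]]; return    # first differing component decides
          elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
              [[ $2 == '<' ]]; return
          fi
      done
      [[ $2 == '==' ]]                   # every component equal
  }
  lt 1.15 2 && echo 'lcov 1.15 is older than 2'   # the branch the trace takes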
00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.dJ0gfbM7nj 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:49.594 
10:18:55 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74308 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74308 00:18:49.594 10:18:55 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74308 ']' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:49.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:49.594 10:18:55 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:49.856 [2024-11-04 10:18:55.429357] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:18:49.856 [2024-11-04 10:18:55.429740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74308 ] 00:18:49.856 [2024-11-04 10:18:55.586387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.118 [2024-11-04 10:18:55.703374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.690 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:50.690 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:50.690 10:18:56 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:50.951 10:18:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:50.951 10:18:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:50.951 10:18:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:50.951 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:50.951 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:50.951 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:50.951 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:50.951 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:51.213 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:51.213 { 00:18:51.213 "name": "nvme0n1", 00:18:51.213 "aliases": [ 00:18:51.213 "822347e2-574b-46a4-a830-7dde1138ed5b" 00:18:51.213 ], 00:18:51.213 "product_name": "NVMe disk", 00:18:51.213 "block_size": 4096, 00:18:51.213 "num_blocks": 1310720, 00:18:51.213 "uuid": 
"822347e2-574b-46a4-a830-7dde1138ed5b", 00:18:51.213 "numa_id": -1, 00:18:51.213 "assigned_rate_limits": { 00:18:51.213 "rw_ios_per_sec": 0, 00:18:51.213 "rw_mbytes_per_sec": 0, 00:18:51.213 "r_mbytes_per_sec": 0, 00:18:51.213 "w_mbytes_per_sec": 0 00:18:51.213 }, 00:18:51.213 "claimed": true, 00:18:51.213 "claim_type": "read_many_write_one", 00:18:51.213 "zoned": false, 00:18:51.213 "supported_io_types": { 00:18:51.213 "read": true, 00:18:51.213 "write": true, 00:18:51.213 "unmap": true, 00:18:51.213 "flush": true, 00:18:51.213 "reset": true, 00:18:51.213 "nvme_admin": true, 00:18:51.213 "nvme_io": true, 00:18:51.213 "nvme_io_md": false, 00:18:51.213 "write_zeroes": true, 00:18:51.213 "zcopy": false, 00:18:51.213 "get_zone_info": false, 00:18:51.213 "zone_management": false, 00:18:51.213 "zone_append": false, 00:18:51.213 "compare": true, 00:18:51.213 "compare_and_write": false, 00:18:51.213 "abort": true, 00:18:51.213 "seek_hole": false, 00:18:51.213 "seek_data": false, 00:18:51.213 "copy": true, 00:18:51.213 "nvme_iov_md": false 00:18:51.213 }, 00:18:51.213 "driver_specific": { 00:18:51.213 "nvme": [ 00:18:51.213 { 00:18:51.213 "pci_address": "0000:00:11.0", 00:18:51.213 "trid": { 00:18:51.213 "trtype": "PCIe", 00:18:51.213 "traddr": "0000:00:11.0" 00:18:51.213 }, 00:18:51.213 "ctrlr_data": { 00:18:51.213 "cntlid": 0, 00:18:51.213 "vendor_id": "0x1b36", 00:18:51.213 "model_number": "QEMU NVMe Ctrl", 00:18:51.213 "serial_number": "12341", 00:18:51.213 "firmware_revision": "8.0.0", 00:18:51.213 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:51.213 "oacs": { 00:18:51.213 "security": 0, 00:18:51.213 "format": 1, 00:18:51.213 "firmware": 0, 00:18:51.213 "ns_manage": 1 00:18:51.213 }, 00:18:51.213 "multi_ctrlr": false, 00:18:51.213 "ana_reporting": false 00:18:51.213 }, 00:18:51.213 "vs": { 00:18:51.213 "nvme_version": "1.4" 00:18:51.213 }, 00:18:51.213 "ns_data": { 00:18:51.213 "id": 1, 00:18:51.213 "can_share": false 00:18:51.213 } 00:18:51.213 } 00:18:51.213 ], 00:18:51.213 "mp_policy": "active_passive" 00:18:51.213 } 00:18:51.213 } 00:18:51.213 ]' 00:18:51.213 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:51.213 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:51.213 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:51.472 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:51.472 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:51.472 10:18:56 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:18:51.472 10:18:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:51.472 10:18:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:51.472 10:18:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:51.472 10:18:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:51.472 10:18:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:51.472 10:18:57 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=a8e279a0-d73d-4fc0-bba6-a4e4834bdca7 00:18:51.472 10:18:57 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:51.472 10:18:57 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8e279a0-d73d-4fc0-bba6-a4e4834bdca7 00:18:51.730 10:18:57 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:18:51.987 10:18:57 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=81295366-4806-4a54-b482-b599e89088d9 00:18:51.987 10:18:57 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 81295366-4806-4a54-b482-b599e89088d9 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.245 10:18:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:52.246 10:18:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.246 10:18:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.246 10:18:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:52.246 10:18:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:52.246 10:18:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:52.246 10:18:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:52.504 { 00:18:52.504 "name": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:52.504 "aliases": [ 00:18:52.504 "lvs/nvme0n1p0" 00:18:52.504 ], 00:18:52.504 "product_name": "Logical Volume", 00:18:52.504 "block_size": 4096, 00:18:52.504 "num_blocks": 26476544, 00:18:52.504 "uuid": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:52.504 "assigned_rate_limits": { 00:18:52.504 "rw_ios_per_sec": 0, 00:18:52.504 "rw_mbytes_per_sec": 0, 00:18:52.504 "r_mbytes_per_sec": 0, 00:18:52.504 "w_mbytes_per_sec": 0 00:18:52.504 }, 00:18:52.504 "claimed": false, 00:18:52.504 "zoned": false, 00:18:52.504 "supported_io_types": { 00:18:52.504 "read": true, 00:18:52.504 "write": true, 00:18:52.504 "unmap": true, 00:18:52.504 "flush": false, 00:18:52.504 "reset": true, 00:18:52.504 "nvme_admin": false, 00:18:52.504 "nvme_io": false, 00:18:52.504 "nvme_io_md": false, 00:18:52.504 "write_zeroes": true, 00:18:52.504 "zcopy": false, 00:18:52.504 "get_zone_info": false, 00:18:52.504 "zone_management": false, 00:18:52.504 "zone_append": false, 00:18:52.504 "compare": false, 00:18:52.504 "compare_and_write": false, 00:18:52.504 "abort": false, 00:18:52.504 "seek_hole": true, 00:18:52.504 "seek_data": true, 00:18:52.504 "copy": false, 00:18:52.504 "nvme_iov_md": false 00:18:52.504 }, 00:18:52.504 "driver_specific": { 00:18:52.504 "lvol": { 00:18:52.504 "lvol_store_uuid": "81295366-4806-4a54-b482-b599e89088d9", 00:18:52.504 "base_bdev": "nvme0n1", 00:18:52.504 "thin_provision": true, 00:18:52.504 "num_allocated_clusters": 0, 00:18:52.504 "snapshot": false, 00:18:52.504 "clone": false, 00:18:52.504 "esnap_clone": false 00:18:52.504 } 00:18:52.504 } 00:18:52.504 } 00:18:52.504 ]' 00:18:52.504 10:18:58 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:52.504 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:52.504 10:18:58 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:52.504 10:18:58 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:52.504 10:18:58 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:52.763 10:18:58 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:52.763 10:18:58 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:52.763 10:18:58 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.763 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:52.763 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:52.763 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:52.763 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:52.763 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:53.024 { 00:18:53.024 "name": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:53.024 "aliases": [ 00:18:53.024 "lvs/nvme0n1p0" 00:18:53.024 ], 00:18:53.024 "product_name": "Logical Volume", 00:18:53.024 "block_size": 4096, 00:18:53.024 "num_blocks": 26476544, 00:18:53.024 "uuid": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:53.024 "assigned_rate_limits": { 00:18:53.024 "rw_ios_per_sec": 0, 00:18:53.024 "rw_mbytes_per_sec": 0, 00:18:53.024 "r_mbytes_per_sec": 0, 00:18:53.024 "w_mbytes_per_sec": 0 00:18:53.024 }, 00:18:53.024 "claimed": false, 00:18:53.024 "zoned": false, 00:18:53.024 "supported_io_types": { 00:18:53.024 "read": true, 00:18:53.024 "write": true, 00:18:53.024 "unmap": true, 00:18:53.024 "flush": false, 00:18:53.024 "reset": true, 00:18:53.024 "nvme_admin": false, 00:18:53.024 "nvme_io": false, 00:18:53.024 "nvme_io_md": false, 00:18:53.024 "write_zeroes": true, 00:18:53.024 "zcopy": false, 00:18:53.024 "get_zone_info": false, 00:18:53.024 "zone_management": false, 00:18:53.024 "zone_append": false, 00:18:53.024 "compare": false, 00:18:53.024 "compare_and_write": false, 00:18:53.024 "abort": false, 00:18:53.024 "seek_hole": true, 00:18:53.024 "seek_data": true, 00:18:53.024 "copy": false, 00:18:53.024 "nvme_iov_md": false 00:18:53.024 }, 00:18:53.024 "driver_specific": { 00:18:53.024 "lvol": { 00:18:53.024 "lvol_store_uuid": "81295366-4806-4a54-b482-b599e89088d9", 00:18:53.024 "base_bdev": "nvme0n1", 00:18:53.024 "thin_provision": true, 00:18:53.024 "num_allocated_clusters": 0, 00:18:53.024 "snapshot": false, 00:18:53.024 "clone": false, 00:18:53.024 "esnap_clone": false 00:18:53.024 } 00:18:53.024 } 00:18:53.024 } 00:18:53.024 ]' 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
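The repeated jq calls in the trace are how get_bdev_size turns the bdev_get_bdevs JSON into a size in MiB; condensed, the computation is the following (a sketch with an assumed helper name; the RPC call and jq filters are the ones traced above):

    get_bdev_size_mib() {
        local bdev_name=$1 info bs nb
        # One RPC round-trip, then block_size * num_blocks scaled to MiB.
        info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$info")   # 4096 in the dumps above
        nb=$(jq '.[] .num_blocks' <<< "$info")   # 26476544 for the lvol
        echo $(( bs * nb / 1024 / 1024 ))        # 4096 * 26476544 / 2^20 = 103424
    }

That is why the thin-provisioned lvol reports 103424 MiB here, while the raw nvme0n1 namespace (1310720 blocks of 4096 bytes) came out to 5120 MiB earlier.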
00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:53.024 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:53.024 10:18:58 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:53.024 10:18:58 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:53.286 10:18:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:53.286 10:18:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 32c05758-8c96-4bab-b0af-25fc06e21ab3 00:18:53.286 10:18:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:53.286 { 00:18:53.286 "name": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:53.286 "aliases": [ 00:18:53.286 "lvs/nvme0n1p0" 00:18:53.286 ], 00:18:53.286 "product_name": "Logical Volume", 00:18:53.286 "block_size": 4096, 00:18:53.286 "num_blocks": 26476544, 00:18:53.286 "uuid": "32c05758-8c96-4bab-b0af-25fc06e21ab3", 00:18:53.286 "assigned_rate_limits": { 00:18:53.286 "rw_ios_per_sec": 0, 00:18:53.286 "rw_mbytes_per_sec": 0, 00:18:53.286 "r_mbytes_per_sec": 0, 00:18:53.286 "w_mbytes_per_sec": 0 00:18:53.286 }, 00:18:53.286 "claimed": false, 00:18:53.286 "zoned": false, 00:18:53.286 "supported_io_types": { 00:18:53.286 "read": true, 00:18:53.286 "write": true, 00:18:53.286 "unmap": true, 00:18:53.286 "flush": false, 00:18:53.286 "reset": true, 00:18:53.286 "nvme_admin": false, 00:18:53.286 "nvme_io": false, 00:18:53.286 "nvme_io_md": false, 00:18:53.286 "write_zeroes": true, 00:18:53.286 "zcopy": false, 00:18:53.286 "get_zone_info": false, 00:18:53.286 "zone_management": false, 00:18:53.286 "zone_append": false, 00:18:53.286 "compare": false, 00:18:53.286 "compare_and_write": false, 00:18:53.286 "abort": false, 00:18:53.286 "seek_hole": true, 00:18:53.286 "seek_data": true, 00:18:53.286 "copy": false, 00:18:53.286 "nvme_iov_md": false 00:18:53.286 }, 00:18:53.286 "driver_specific": { 00:18:53.286 "lvol": { 00:18:53.286 "lvol_store_uuid": "81295366-4806-4a54-b482-b599e89088d9", 00:18:53.286 "base_bdev": "nvme0n1", 00:18:53.286 "thin_provision": true, 00:18:53.286 "num_allocated_clusters": 0, 00:18:53.286 "snapshot": false, 00:18:53.286 "clone": false, 00:18:53.286 "esnap_clone": false 00:18:53.286 } 00:18:53.286 } 00:18:53.286 } 00:18:53.286 ]' 00:18:53.286 10:18:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:53.549 10:18:59 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:53.549 10:18:59 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:53.549 10:18:59 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:18:53.549 10:18:59 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:53.549 10:18:59 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 32c05758-8c96-4bab-b0af-25fc06e21ab3 --l2p_dram_limit 10' 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:53.549 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:53.549 10:18:59 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 32c05758-8c96-4bab-b0af-25fc06e21ab3 --l2p_dram_limit 10 -c nvc0n1p0 00:18:53.549 [2024-11-04 10:18:59.244720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.244760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.549 [2024-11-04 10:18:59.244774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:53.549 [2024-11-04 10:18:59.244796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.244844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.244852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.549 [2024-11-04 10:18:59.244860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:18:53.549 [2024-11-04 10:18:59.244866] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.244886] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.549 [2024-11-04 10:18:59.245451] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.549 [2024-11-04 10:18:59.245469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.245475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.549 [2024-11-04 10:18:59.245483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.589 ms 00:18:53.549 [2024-11-04 10:18:59.245489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.245544] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 29f73b09-5fd3-453f-95e2-8e762e97d9e7 00:18:53.549 [2024-11-04 10:18:59.246484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.246502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:53.549 [2024-11-04 10:18:59.246509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:53.549 [2024-11-04 10:18:59.246518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.251288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 
10:18:59.251381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.549 [2024-11-04 10:18:59.251439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.738 ms 00:18:53.549 [2024-11-04 10:18:59.251460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.251538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.251634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.549 [2024-11-04 10:18:59.251697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:53.549 [2024-11-04 10:18:59.251716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.251756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.251775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.549 [2024-11-04 10:18:59.251800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:53.549 [2024-11-04 10:18:59.251817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.251853] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:53.549 [2024-11-04 10:18:59.254746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.254842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.549 [2024-11-04 10:18:59.254888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.897 ms 00:18:53.549 [2024-11-04 10:18:59.254909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.254947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.254990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.549 [2024-11-04 10:18:59.255009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:53.549 [2024-11-04 10:18:59.255024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.255075] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:53.549 [2024-11-04 10:18:59.255193] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.549 [2024-11-04 10:18:59.255274] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.549 [2024-11-04 10:18:59.255300] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.549 [2024-11-04 10:18:59.255326] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.549 [2024-11-04 10:18:59.255349] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.549 [2024-11-04 10:18:59.255372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:53.549 [2024-11-04 10:18:59.255387] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.549 [2024-11-04 10:18:59.255436] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.549 [2024-11-04 10:18:59.255453] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.549 [2024-11-04 10:18:59.255472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.255486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.549 [2024-11-04 10:18:59.255503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:18:53.549 [2024-11-04 10:18:59.255523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.255600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.549 [2024-11-04 10:18:59.255650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.549 [2024-11-04 10:18:59.255666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:53.549 [2024-11-04 10:18:59.255680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.549 [2024-11-04 10:18:59.255776] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.549 [2024-11-04 10:18:59.255841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.549 [2024-11-04 10:18:59.255858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.549 [2024-11-04 10:18:59.255873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.549 [2024-11-04 10:18:59.255916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.549 [2024-11-04 10:18:59.255933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.549 [2024-11-04 10:18:59.255949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:53.549 [2024-11-04 10:18:59.255964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.549 [2024-11-04 10:18:59.255980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:53.549 [2024-11-04 10:18:59.255994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.549 [2024-11-04 10:18:59.256009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.549 [2024-11-04 10:18:59.256054] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:53.549 [2024-11-04 10:18:59.256072] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.549 [2024-11-04 10:18:59.256087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.549 [2024-11-04 10:18:59.256103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:53.549 [2024-11-04 10:18:59.256117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.549 [2024-11-04 10:18:59.256134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.549 [2024-11-04 10:18:59.256147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:53.549 [2024-11-04 10:18:59.256192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.549 [2024-11-04 10:18:59.256209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.550 [2024-11-04 10:18:59.256227] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.550 
[2024-11-04 10:18:59.256278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256295] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.550 [2024-11-04 10:18:59.256351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:53.550 [2024-11-04 10:18:59.256397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256412] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.550 [2024-11-04 10:18:59.256442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.550 [2024-11-04 10:18:59.256494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.550 [2024-11-04 10:18:59.256511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:53.550 [2024-11-04 10:18:59.256519] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.550 [2024-11-04 10:18:59.256524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.550 [2024-11-04 10:18:59.256530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:53.550 [2024-11-04 10:18:59.256537] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.550 [2024-11-04 10:18:59.256548] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:53.550 [2024-11-04 10:18:59.256555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256560] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.550 [2024-11-04 10:18:59.256567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.550 [2024-11-04 10:18:59.256573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.550 [2024-11-04 10:18:59.256586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.550 [2024-11-04 10:18:59.256595] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.550 [2024-11-04 10:18:59.256600] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.550 [2024-11-04 10:18:59.256606] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.550 [2024-11-04 10:18:59.256611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.550 [2024-11-04 10:18:59.256618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.550 [2024-11-04 10:18:59.256626] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.550 [2024-11-04 
10:18:59.256635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:53.550 [2024-11-04 10:18:59.256648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:53.550 [2024-11-04 10:18:59.256654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:53.550 [2024-11-04 10:18:59.256661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:53.550 [2024-11-04 10:18:59.256666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:53.550 [2024-11-04 10:18:59.256673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:53.550 [2024-11-04 10:18:59.256678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:53.550 [2024-11-04 10:18:59.256685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:53.550 [2024-11-04 10:18:59.256690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:53.550 [2024-11-04 10:18:59.256698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256703] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256711] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256723] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:53.550 [2024-11-04 10:18:59.256728] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.550 [2024-11-04 10:18:59.256736] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256744] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.550 [2024-11-04 10:18:59.256751] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.550 [2024-11-04 10:18:59.256757] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.550 [2024-11-04 10:18:59.256764] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.550 [2024-11-04 10:18:59.256770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.550 [2024-11-04 10:18:59.256777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.550 [2024-11-04 10:18:59.256798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:18:53.550 [2024-11-04 10:18:59.256806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.550 [2024-11-04 10:18:59.256839] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:18:53.550 [2024-11-04 10:18:59.256849] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:57.771 [2024-11-04 10:19:02.983723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:02.983810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:57.771 [2024-11-04 10:19:02.983827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3726.868 ms 00:18:57.771 [2024-11-04 10:19:02.983838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.008477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.008537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.771 [2024-11-04 10:19:03.008549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.433 ms 00:18:57.771 [2024-11-04 10:19:03.008559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.008680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.008691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:57.771 [2024-11-04 10:19:03.008698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:18:57.771 [2024-11-04 10:19:03.008709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.035100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.035137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.771 [2024-11-04 10:19:03.035147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.364 ms 00:18:57.771 [2024-11-04 10:19:03.035156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.035185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.035194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.771 [2024-11-04 10:19:03.035201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:18:57.771 [2024-11-04 10:19:03.035211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.035603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.035619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.771 [2024-11-04 10:19:03.035627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:18:57.771 [2024-11-04 10:19:03.035634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 
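Two details in the trace above are worth pausing on. The "[: : integer expression expected" message from restore.sh line 54 is bash rejecting a numeric test on an empty string (the traced '[' '' -eq 1 ']', from a getopts flag that was never set); a guarded test avoids it. And the FTL layout numbers are internally consistent and easy to check by hand. Sketches of both, with values copied from the log (variable names here are illustrative, not the script's own):

    # A guarded form of the failing test at restore.sh line 54; with the
    # flag unset, the -n check short-circuits before -eq ever runs:
    flag=''
    [[ -n $flag && $flag -eq 1 ]] && echo "flag set"

    # Cross-checking the layout dump: 20971520 L2P entries at 4-byte
    # addresses is exactly the 80.00 MiB shown for "Region l2p":
    echo $(( 20971520 * 4 / 1024 / 1024 ))   # -> 80
    # 2048 P2L checkpoint pages of 4 KiB each is the 8.00 MiB reported
    # for each of the p2l0..p2l3 regions:
    echo $(( 2048 * 4096 / 1024 / 1024 ))    # -> 8

The --l2p_dram_limit 10 passed to bdev_ftl_create is also why ftl_l2p_cache reports a maximum resident size of 9 (of 10) MiB just below.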
[2024-11-04 10:19:03.035725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.035734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.771 [2024-11-04 10:19:03.035740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:57.771 [2024-11-04 10:19:03.035750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.048024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.048055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.771 [2024-11-04 10:19:03.048063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.256 ms 00:18:57.771 [2024-11-04 10:19:03.048073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.057344] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:57.771 [2024-11-04 10:19:03.059952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.059979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:57.771 [2024-11-04 10:19:03.059990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.798 ms 00:18:57.771 [2024-11-04 10:19:03.059996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.137168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.137208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:57.771 [2024-11-04 10:19:03.137221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 77.145 ms 00:18:57.771 [2024-11-04 10:19:03.137228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.137370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.137378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:57.771 [2024-11-04 10:19:03.137389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:18:57.771 [2024-11-04 10:19:03.137396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.154954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.154982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:57.771 [2024-11-04 10:19:03.154993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.531 ms 00:18:57.771 [2024-11-04 10:19:03.154999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.172205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.172230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:57.771 [2024-11-04 10:19:03.172240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.174 ms 00:18:57.771 [2024-11-04 10:19:03.172246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.172698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.172710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:57.771 
[2024-11-04 10:19:03.172718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:18:57.771 [2024-11-04 10:19:03.172723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.232612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.232641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:57.771 [2024-11-04 10:19:03.232655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.860 ms 00:18:57.771 [2024-11-04 10:19:03.232661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.251115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.251141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:57.771 [2024-11-04 10:19:03.251153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.395 ms 00:18:57.771 [2024-11-04 10:19:03.251160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.268991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.269016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:57.771 [2024-11-04 10:19:03.269026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.802 ms 00:18:57.771 [2024-11-04 10:19:03.269032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.286504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.286613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:57.771 [2024-11-04 10:19:03.286630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.443 ms 00:18:57.771 [2024-11-04 10:19:03.286636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.286665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.286672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:57.771 [2024-11-04 10:19:03.286682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:57.771 [2024-11-04 10:19:03.286687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.286749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.771 [2024-11-04 10:19:03.286756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:57.771 [2024-11-04 10:19:03.286764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:57.771 [2024-11-04 10:19:03.286770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.771 [2024-11-04 10:19:03.287481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4042.434 ms, result 0 00:18:57.771 { 00:18:57.771 "name": "ftl0", 00:18:57.771 "uuid": "29f73b09-5fd3-453f-95e2-8e762e97d9e7" 00:18:57.771 } 00:18:57.771 10:19:03 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:57.771 10:19:03 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:57.771 10:19:03 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:58.033 10:19:03 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:58.033 [2024-11-04 10:19:03.655106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.655146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:58.033 [2024-11-04 10:19:03.655156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:58.033 [2024-11-04 10:19:03.655169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.655188] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:58.033 [2024-11-04 10:19:03.657296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.657406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:58.033 [2024-11-04 10:19:03.657422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.093 ms 00:18:58.033 [2024-11-04 10:19:03.657429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.657633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.657641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:58.033 [2024-11-04 10:19:03.657649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:18:58.033 [2024-11-04 10:19:03.657656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.660103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.660119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:58.033 [2024-11-04 10:19:03.660128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.434 ms 00:18:58.033 [2024-11-04 10:19:03.660133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.664839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.664863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:58.033 [2024-11-04 10:19:03.664872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.689 ms 00:18:58.033 [2024-11-04 10:19:03.664878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.683460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.683496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:58.033 [2024-11-04 10:19:03.683506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.523 ms 00:18:58.033 [2024-11-04 10:19:03.683512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.697515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.697545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:58.033 [2024-11-04 10:19:03.697556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.970 ms 00:18:58.033 [2024-11-04 10:19:03.697562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.697677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.697685] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:58.033 [2024-11-04 10:19:03.697694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:58.033 [2024-11-04 10:19:03.697699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.715707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.715816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:58.033 [2024-11-04 10:19:03.715832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.994 ms 00:18:58.033 [2024-11-04 10:19:03.715837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.732949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.732974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:58.033 [2024-11-04 10:19:03.732983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.084 ms 00:18:58.033 [2024-11-04 10:19:03.732989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.750178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.750203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:58.033 [2024-11-04 10:19:03.750212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.158 ms 00:18:58.033 [2024-11-04 10:19:03.750218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.766902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.033 [2024-11-04 10:19:03.766992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:58.033 [2024-11-04 10:19:03.767007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.626 ms 00:18:58.033 [2024-11-04 10:19:03.767012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.033 [2024-11-04 10:19:03.767037] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:58.033 [2024-11-04 10:19:03.767047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:58.033 [2024-11-04 10:19:03.767089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:58.034 [2024-11-04 10:19:03.767098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:58.034 [2024-11-04 10:19:03.767103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:58.034 [2024-11-04 10:19:03.767110] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free [... Bands 11 through 83 elided: each reports the identical line 0 / 261120 wr_cnt: 0 state: free ...] 00:18:58.035 [2024-11-04 10:19:03.767596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767700] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:58.035 [2024-11-04 10:19:03.767712] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:58.035 [2024-11-04 10:19:03.767719] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29f73b09-5fd3-453f-95e2-8e762e97d9e7 00:18:58.035 [2024-11-04 10:19:03.767726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:58.035 [2024-11-04 10:19:03.767735] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:58.035 [2024-11-04 10:19:03.767740] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:58.035 [2024-11-04 10:19:03.767747] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:58.035 [2024-11-04 10:19:03.767754] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:58.035 [2024-11-04 10:19:03.767761] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:58.035 [2024-11-04 10:19:03.767767] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:58.035 [2024-11-04 10:19:03.767773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:58.035 [2024-11-04 10:19:03.767777] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:18:58.035 [2024-11-04 10:19:03.767798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.035 [2024-11-04 10:19:03.767804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:58.035 [2024-11-04 10:19:03.767812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.762 ms 00:18:58.035 [2024-11-04 10:19:03.767818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.777409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.298 [2024-11-04 10:19:03.777433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:58.298 [2024-11-04 10:19:03.777442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.566 ms 00:18:58.298 [2024-11-04 10:19:03.777448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.777715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:58.298 [2024-11-04 10:19:03.777725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:58.298 [2024-11-04 10:19:03.777733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:18:58.298 [2024-11-04 10:19:03.777739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.810338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298 [2024-11-04 10:19:03.810456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:58.298 [2024-11-04 10:19:03.810471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298 [2024-11-04 10:19:03.810478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.810527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298 [2024-11-04 10:19:03.810534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:58.298 [2024-11-04 10:19:03.810541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298 [2024-11-04 10:19:03.810547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.810609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298 [2024-11-04 10:19:03.810617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:58.298 [2024-11-04 10:19:03.810624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298 [2024-11-04 10:19:03.810630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.810646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298 [2024-11-04 10:19:03.810652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:58.298 [2024-11-04 10:19:03.810659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298 [2024-11-04 10:19:03.810665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298 [2024-11-04 10:19:03.870488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298 [2024-11-04 10:19:03.870528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:58.298 [2024-11-04 10:19:03.870539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298 
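
The shutdown dump above is worth decoding before the trace moves on. Each "Band N: 0 / 261120" line reads as valid blocks over the band's usable blocks, wr_cnt is the band's write count, and the statistics block reports 960 total writes against 0 user writes. WAF, the write amplification factor, is total writes divided by user writes, so with no user data written it prints as "inf": all 960 writes were FTL metadata. A minimal sketch of summarizing such a dump offline; the field positions are inferred from the lines above, and ftl_bands.log is a hypothetical capture of this console output:

  # Summarize a captured ftl_dev_dump_bands/ftl_dev_dump_stats dump and recompute WAF.
  # ftl_bands.log is hypothetical; field layout is inferred from the log above.
  awk '
    /Band [0-9]+:/ {                    # a wrapped line may carry several band entries
        n = split($0, a, "Band ")
        for (i = 2; i <= n; i++) { split(a[i], b, " "); bands++; valid += b[2] }
    }
    /total writes:/ { split($0, t, "total writes: "); total = t[2] + 0 }
    /user writes:/  { split($0, u, "user writes: ");  user  = u[2] + 0 }
    END {
        printf "bands: %d, valid blocks: %d\n", bands, valid
        if (user > 0) printf "WAF: %.2f\n", total / user
        else          printf "WAF: inf (%d metadata writes, 0 user writes)\n", total
    }' ftl_bands.log

Pointed at one dump's worth of lines, it would report 100 bands, 0 valid blocks, and the infinite WAF seen above, the expected signature of a device that was formatted and shut down without ever taking user writes.
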
[2024-11-04 10:19:03.870545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:58.298
[2024-11-04 10:19:03.919487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:58.298
[2024-11-04 10:19:03.919588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:58.298
[2024-11-04 10:19:03.919646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:58.298
[2024-11-04 10:19:03.919739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:58.298
[2024-11-04 10:19:03.919805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:58.298
[2024-11-04 10:19:03.919858] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.919901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:58.298
[2024-11-04 10:19:03.919908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:58.298
[2024-11-04 10:19:03.919915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:58.298
[2024-11-04 10:19:03.919921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:58.298
[2024-11-04 10:19:03.920025] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 264.891 ms, result 0 00:18:58.298
true 00:18:58.298
10:19:03 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74308
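
restore.sh@66 now tears down the SPDK app that owned ftl0, and the xtrace lines that follow expand the kill helper in common/autotest_common.sh: probe the pid with kill -0, resolve its command name via ps, refuse to touch a sudo wrapper, then kill and wait. A minimal bash sketch of the same pattern; the @952-@976 markers below are the real helper's line numbers, while this sketch is illustrative rather than the SPDK implementation:

  # Illustrative kill-and-reap helper mirroring the xtrace sequence below.
  killprocess_sketch() {
      local pid=$1
      [ -z "$pid" ] && return 1                 # nothing to kill (cf. @952)
      kill -0 "$pid" 2>/dev/null || return 0    # probe: already gone? (cf. @956)
      [ "$(uname)" = Linux ] || return 1        # only the Linux path is shown (cf. @957)
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # cf. @958
      [ "$process_name" = sudo ] && return 1    # never kill the sudo wrapper (cf. @962)
      echo "killing process with pid $pid"      # cf. @970
      kill "$pid" && wait "$pid"                # reap; works when the target is this
  }                                             # shell's child, as in the harness (@971/@976)

The wait matters: the next stage re-creates ftl0 on the same devices, so the harness blocks until the old reactor (reactor_0, visible in the ps output below) has fully exited and released its bdevs.
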
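
Once the old process is gone, restore.sh@69 (further down this trace) seeds the test file whose md5sum anchors the restore check. The dd numbers are internally consistent: bs=4K with count=256K is 262144 records of 4096 bytes, 1073741824 bytes total (exactly 1.0 GiB, about 1.1 GB decimal), and that size over the 3.27438 s elapsed gives the reported 328 MB/s. A quick check of the arithmetic, with the values copied from the trace below:

  # Recompute dd's summary line from its parameters (values from the trace below).
  bs=4096; count=$((256 * 1024))
  bytes=$((bs * count))                  # 262144 * 4096 = 1073741824 bytes = 1.0 GiB
  awk -v b="$bytes" -v t=3.27438 'BEGIN { printf "%d bytes, %.0f MB/s\n", b, b / t / 1e6 }'
  # -> "1073741824 bytes, 328 MB/s", matching "... copied, 3.27438 s, 328 MB/s"
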
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74308 ']' 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74308 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74308 00:18:58.298
killing process with pid 74308
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74308' 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74308 00:18:58.298
10:19:03 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74308 00:19:04.885
10:19:09 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:19:07.475
262144+0 records in 00:19:07.475
262144+0 records out 00:19:07.475
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.27438 s, 328 MB/s 00:19:09.383
10:19:12 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:19:09.383
10:19:15 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:09.383
[2024-11-04 10:19:15.068769] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:19:09.645
[2024-11-04 10:19:15.068873] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74527 ] 00:19:09.645
[2024-11-04 10:19:15.218237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.645
[2024-11-04 10:19:15.295344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:09.906
[2024-11-04 10:19:15.501161] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:09.906
[2024-11-04 10:19:15.501340] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:10.168
[2024-11-04 10:19:15.655893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.168
[2024-11-04 10:19:15.656057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:10.168
[2024-11-04 10:19:15.656113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.168
[2024-11-04 10:19:15.656132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.168
[2024-11-04 10:19:15.656185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.168
[2024-11-04 10:19:15.656205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:10.168
[2024-11-04 10:19:15.656222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:10.168
[2024-11-04 10:19:15.656236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.168
[2024-11-04 10:19:15.656261] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0
as write buffer cache 00:19:10.168 [2024-11-04 10:19:15.656815] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:10.168 [2024-11-04 10:19:15.656900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.168 [2024-11-04 10:19:15.656943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:10.168 [2024-11-04 10:19:15.656963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.643 ms 00:19:10.168 [2024-11-04 10:19:15.656979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.168 [2024-11-04 10:19:15.658069] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:10.168 [2024-11-04 10:19:15.668006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.168 [2024-11-04 10:19:15.668106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:10.169 [2024-11-04 10:19:15.668154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.939 ms 00:19:10.169 [2024-11-04 10:19:15.668163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.668205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.668214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:10.169 [2024-11-04 10:19:15.668221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:10.169 [2024-11-04 10:19:15.668227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.672691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.672717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:10.169 [2024-11-04 10:19:15.672725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.417 ms 00:19:10.169 [2024-11-04 10:19:15.672731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.672801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.672809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:10.169 [2024-11-04 10:19:15.672815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:10.169 [2024-11-04 10:19:15.672821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.672864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.672871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:10.169 [2024-11-04 10:19:15.672881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:10.169 [2024-11-04 10:19:15.672887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.672907] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:10.169 [2024-11-04 10:19:15.675509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.675596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:10.169 [2024-11-04 10:19:15.675608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:19:10.169 [2024-11-04 10:19:15.675617] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.675641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.675647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:10.169 [2024-11-04 10:19:15.675653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:10.169 [2024-11-04 10:19:15.675659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.675674] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:10.169 [2024-11-04 10:19:15.675688] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:10.169 [2024-11-04 10:19:15.675715] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:10.169 [2024-11-04 10:19:15.675729] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:10.169 [2024-11-04 10:19:15.675823] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:10.169 [2024-11-04 10:19:15.675832] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:10.169 [2024-11-04 10:19:15.675841] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:10.169 [2024-11-04 10:19:15.675849] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:10.169 [2024-11-04 10:19:15.675856] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:10.169 [2024-11-04 10:19:15.675862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:10.169 [2024-11-04 10:19:15.675868] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:10.169 [2024-11-04 10:19:15.675874] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:10.169 [2024-11-04 10:19:15.675879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:10.169 [2024-11-04 10:19:15.675887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.675893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:10.169 [2024-11-04 10:19:15.675898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:19:10.169 [2024-11-04 10:19:15.675904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.675966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.169 [2024-11-04 10:19:15.675973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:10.169 [2024-11-04 10:19:15.675979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:10.169 [2024-11-04 10:19:15.675984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.169 [2024-11-04 10:19:15.676059] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:10.169 [2024-11-04 10:19:15.676068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:10.169 [2024-11-04 10:19:15.676075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.169 
[2024-11-04 10:19:15.676081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:10.169 [2024-11-04 10:19:15.676092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:10.169 [2024-11-04 10:19:15.676109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.169 [2024-11-04 10:19:15.676119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:10.169 [2024-11-04 10:19:15.676125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:10.169 [2024-11-04 10:19:15.676131] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:10.169 [2024-11-04 10:19:15.676136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:10.169 [2024-11-04 10:19:15.676142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:10.169 [2024-11-04 10:19:15.676151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:10.169 [2024-11-04 10:19:15.676161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676172] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:10.169 [2024-11-04 10:19:15.676176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:10.169 [2024-11-04 10:19:15.676191] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676196] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676201] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:10.169 [2024-11-04 10:19:15.676206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676216] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:10.169 [2024-11-04 10:19:15.676221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:10.169 [2024-11-04 10:19:15.676235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.169 [2024-11-04 10:19:15.676246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md_mirror 00:19:10.169 [2024-11-04 10:19:15.676251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:10.169 [2024-11-04 10:19:15.676255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:10.169 [2024-11-04 10:19:15.676260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:10.169 [2024-11-04 10:19:15.676265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:10.169 [2024-11-04 10:19:15.676270] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:10.169 [2024-11-04 10:19:15.676287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:10.169 [2024-11-04 10:19:15.676292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676299] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:10.169 [2024-11-04 10:19:15.676306] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:10.169 [2024-11-04 10:19:15.676311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:10.169 [2024-11-04 10:19:15.676323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:10.169 [2024-11-04 10:19:15.676328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:10.169 [2024-11-04 10:19:15.676333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:10.169 [2024-11-04 10:19:15.676338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:10.169 [2024-11-04 10:19:15.676344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:10.169 [2024-11-04 10:19:15.676349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:10.169 [2024-11-04 10:19:15.676355] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:10.169 [2024-11-04 10:19:15.676362] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.169 [2024-11-04 10:19:15.676369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:10.170 [2024-11-04 10:19:15.676374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:10.170 [2024-11-04 10:19:15.676380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:10.170 [2024-11-04 10:19:15.676385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:10.170 [2024-11-04 10:19:15.676391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:10.170 [2024-11-04 10:19:15.676396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:10.170 [2024-11-04 10:19:15.676402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd 
ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:10.170 [2024-11-04 10:19:15.676407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:10.170 [2024-11-04 10:19:15.676412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:10.170 [2024-11-04 10:19:15.676418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:10.170 [2024-11-04 10:19:15.676444] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:10.170 [2024-11-04 10:19:15.676450] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676458] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:10.170 [2024-11-04 10:19:15.676463] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:10.170 [2024-11-04 10:19:15.676469] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:10.170 [2024-11-04 10:19:15.676474] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:10.170 [2024-11-04 10:19:15.676481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.676487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:10.170 [2024-11-04 10:19:15.676492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.474 ms 00:19:10.170 [2024-11-04 10:19:15.676498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.698019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.698050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:10.170 [2024-11-04 10:19:15.698059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.489 ms 00:19:10.170 [2024-11-04 10:19:15.698066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.698134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.698144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:10.170 [2024-11-04 10:19:15.698150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 
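
The superblock layout entries just above give each region a blk_offs/blk_sz in hexadecimal FTL blocks, while the dump_region lines report MiB. The two agree if one FTL block is 4 KiB, an assumption these figures support rather than something the log states: blk_sz:0x5000 is 20480 blocks, i.e. 80.00 MiB, exactly the l2p region size, which in turn matches 20971520 L2P entries at the logged address size of 4 bytes per entry. A small sketch of the conversion:

  # Convert hex block offsets/sizes to MiB, assuming 4 KiB FTL blocks (an assumption
  # chosen because it reproduces the dump_region figures above).
  blk=4096
  to_mib() { awk -v n=$(( $1 )) -v b=$blk 'BEGIN { printf "%.2f MiB\n", n * b / 1048576 }'; }
  to_mib 0x5000   # l2p blk_sz   -> 80.00 MiB
  to_mib 0x20     # l2p blk_offs -> 0.12 MiB (0.125 exactly)
  to_mib 0x800    # one p2l area -> 8.00 MiB
  echo $(( 20971520 * 4 / 1048576 ))   # L2P table: 80 MiB again
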
00:19:10.170 [2024-11-04 10:19:15.698156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.733876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.733909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:10.170 [2024-11-04 10:19:15.733919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.673 ms 00:19:10.170 [2024-11-04 10:19:15.733925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.733962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.733969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:10.170 [2024-11-04 10:19:15.733976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:10.170 [2024-11-04 10:19:15.733984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.734302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.734316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:10.170 [2024-11-04 10:19:15.734323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:10.170 [2024-11-04 10:19:15.734329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.734432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.734439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:10.170 [2024-11-04 10:19:15.734446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:19:10.170 [2024-11-04 10:19:15.734452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.745131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.745156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:10.170 [2024-11-04 10:19:15.745165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.659 ms 00:19:10.170 [2024-11-04 10:19:15.745173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.754822] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:10.170 [2024-11-04 10:19:15.754850] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:10.170 [2024-11-04 10:19:15.754860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.754866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:10.170 [2024-11-04 10:19:15.754873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:19:10.170 [2024-11-04 10:19:15.754879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.773462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.773489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:10.170 [2024-11-04 10:19:15.773502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.550 ms 00:19:10.170 [2024-11-04 10:19:15.773508] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.782295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.782326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:10.170 [2024-11-04 10:19:15.782333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.753 ms 00:19:10.170 [2024-11-04 10:19:15.782338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.790803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.790827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:10.170 [2024-11-04 10:19:15.790834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.439 ms 00:19:10.170 [2024-11-04 10:19:15.790840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.791290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.791304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:10.170 [2024-11-04 10:19:15.791311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:19:10.170 [2024-11-04 10:19:15.791317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.834791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.834845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:10.170 [2024-11-04 10:19:15.834856] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.460 ms 00:19:10.170 [2024-11-04 10:19:15.834862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.842747] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:10.170 [2024-11-04 10:19:15.844650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.844760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:10.170 [2024-11-04 10:19:15.844774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.751 ms 00:19:10.170 [2024-11-04 10:19:15.844789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.844849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.844858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:10.170 [2024-11-04 10:19:15.844866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:10.170 [2024-11-04 10:19:15.844873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.844916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.844925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:10.170 [2024-11-04 10:19:15.844933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:19:10.170 [2024-11-04 10:19:15.844940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.844956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.844963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start 
core poller 00:19:10.170 [2024-11-04 10:19:15.844969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:10.170 [2024-11-04 10:19:15.844975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.845007] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:10.170 [2024-11-04 10:19:15.845015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.845021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:10.170 [2024-11-04 10:19:15.845029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:10.170 [2024-11-04 10:19:15.845036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.862759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.862798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:10.170 [2024-11-04 10:19:15.862808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.710 ms 00:19:10.170 [2024-11-04 10:19:15.862828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.170 [2024-11-04 10:19:15.862888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:10.170 [2024-11-04 10:19:15.862896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:10.170 [2024-11-04 10:19:15.862903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:19:10.171 [2024-11-04 10:19:15.862909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:10.171 [2024-11-04 10:19:15.863675] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 207.448 ms, result 0 00:19:11.556  [2024-11-04T10:19:18.244Z] Copying: 30/1024 [MB] (30 MBps) [2024-11-04T10:19:19.189Z] Copying: 49/1024 [MB] (19 MBps) [2024-11-04T10:19:20.144Z] Copying: 65/1024 [MB] (15 MBps) [2024-11-04T10:19:21.087Z] Copying: 82/1024 [MB] (17 MBps) [2024-11-04T10:19:22.031Z] Copying: 106/1024 [MB] (23 MBps) [2024-11-04T10:19:22.971Z] Copying: 123/1024 [MB] (17 MBps) [2024-11-04T10:19:23.911Z] Copying: 166/1024 [MB] (42 MBps) [2024-11-04T10:19:25.284Z] Copying: 207/1024 [MB] (40 MBps) [2024-11-04T10:19:25.907Z] Copying: 252/1024 [MB] (45 MBps) [2024-11-04T10:19:27.281Z] Copying: 298/1024 [MB] (45 MBps) [2024-11-04T10:19:28.214Z] Copying: 340/1024 [MB] (42 MBps) [2024-11-04T10:19:29.148Z] Copying: 386/1024 [MB] (45 MBps) [2024-11-04T10:19:30.115Z] Copying: 440/1024 [MB] (54 MBps) [2024-11-04T10:19:31.048Z] Copying: 474/1024 [MB] (34 MBps) [2024-11-04T10:19:31.991Z] Copying: 507/1024 [MB] (33 MBps) [2024-11-04T10:19:32.932Z] Copying: 536/1024 [MB] (29 MBps) [2024-11-04T10:19:34.320Z] Copying: 565/1024 [MB] (29 MBps) [2024-11-04T10:19:34.889Z] Copying: 590/1024 [MB] (24 MBps) [2024-11-04T10:19:36.267Z] Copying: 614/1024 [MB] (24 MBps) [2024-11-04T10:19:37.201Z] Copying: 634/1024 [MB] (19 MBps) [2024-11-04T10:19:38.134Z] Copying: 653/1024 [MB] (18 MBps) [2024-11-04T10:19:39.096Z] Copying: 671/1024 [MB] (17 MBps) [2024-11-04T10:19:40.030Z] Copying: 692/1024 [MB] (21 MBps) [2024-11-04T10:19:40.964Z] Copying: 715/1024 [MB] (22 MBps) [2024-11-04T10:19:41.899Z] Copying: 730/1024 [MB] (15 MBps) [2024-11-04T10:19:43.274Z] Copying: 748/1024 [MB] (17 MBps) [2024-11-04T10:19:44.208Z] Copying: 776264/1048576 [kB] (9616 kBps) 
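
The per-interval Copying entries continue below up to 1024/1024 MB, and the closing "(average 22 MBps)" can be sanity-checked from the surrounding stamps: 'FTL startup' finishes at 10:19:15.863 above, the final Copying entry lands at 10:20:00.733Z below, and 1024 MB over those ~45 s is ~22.8 MBps, consistent with the rounded average. A hedged check, with the two stamps read off the neighboring lines:

  # Approximate the spdk_dd average throughput from wall-clock stamps in this log.
  start="10:19:15.863"    # 'FTL startup' finished (above)
  end="10:20:00.733"      # final "Copying: 1024/1024" entry (below)
  awk -v s="$start" -v e="$end" 'BEGIN {
      split(s, a, ":"); split(e, b, ":")
      secs = (b[1] - a[1]) * 3600 + (b[2] - a[2]) * 60 + (b[3] - a[3])
      printf "%.1f s for 1024 MB -> %.1f MBps\n", secs, 1024 / secs
  }'
  # -> ~44.9 s for 1024 MB -> ~22.8 MBps, in line with "(average 22 MBps)"

A plausible reading of the mid-run dip to roughly 9-10 MBps (the [kB]-denominated entries) is write pressure inside the FTL once the NV cache fills, though the log itself does not say so explicitly.
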
[2024-11-04T10:19:45.140Z] Copying: 785976/1048576 [kB] (9712 kBps) [2024-11-04T10:19:46.073Z] Copying: 795388/1048576 [kB] (9412 kBps) [2024-11-04T10:19:47.007Z] Copying: 787/1024 [MB] (10 MBps) [2024-11-04T10:19:47.942Z] Copying: 798/1024 [MB] (11 MBps) [2024-11-04T10:19:49.315Z] Copying: 810/1024 [MB] (12 MBps) [2024-11-04T10:19:49.915Z] Copying: 821/1024 [MB] (10 MBps) [2024-11-04T10:19:51.309Z] Copying: 832/1024 [MB] (11 MBps) [2024-11-04T10:19:52.242Z] Copying: 844/1024 [MB] (11 MBps) [2024-11-04T10:19:53.176Z] Copying: 864/1024 [MB] (20 MBps) [2024-11-04T10:19:54.109Z] Copying: 882/1024 [MB] (17 MBps) [2024-11-04T10:19:55.055Z] Copying: 902/1024 [MB] (20 MBps) [2024-11-04T10:19:55.989Z] Copying: 921/1024 [MB] (18 MBps) [2024-11-04T10:19:56.923Z] Copying: 944/1024 [MB] (23 MBps) [2024-11-04T10:19:58.297Z] Copying: 963/1024 [MB] (19 MBps) [2024-11-04T10:19:59.231Z] Copying: 977/1024 [MB] (13 MBps) [2024-11-04T10:20:00.167Z] Copying: 996/1024 [MB] (19 MBps) [2024-11-04T10:20:00.733Z] Copying: 1012/1024 [MB] (15 MBps) [2024-11-04T10:20:00.733Z] Copying: 1024/1024 [MB] (average 22 MBps)[2024-11-04 10:20:00.464610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.464661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:54.988 [2024-11-04 10:20:00.464681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:54.988 [2024-11-04 10:20:00.464693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.464718] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:54.988 [2024-11-04 10:20:00.467616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.467793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:54.988 [2024-11-04 10:20:00.467816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.880 ms 00:19:54.988 [2024-11-04 10:20:00.467828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.469314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.469345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:54.988 [2024-11-04 10:20:00.469359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.447 ms 00:19:54.988 [2024-11-04 10:20:00.469371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.481818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.481943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:54.988 [2024-11-04 10:20:00.481966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.426 ms 00:19:54.988 [2024-11-04 10:20:00.481977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.488681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.488725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:54.988 [2024-11-04 10:20:00.488740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.669 ms 00:19:54.988 [2024-11-04 10:20:00.488752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.512347] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.512537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:54.988 [2024-11-04 10:20:00.512562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.502 ms 00:19:54.988 [2024-11-04 10:20:00.512582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.527185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.527315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:54.988 [2024-11-04 10:20:00.527337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.548 ms 00:19:54.988 [2024-11-04 10:20:00.527349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.527503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.527523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:54.988 [2024-11-04 10:20:00.527537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:54.988 [2024-11-04 10:20:00.527554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.550756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.550956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:54.988 [2024-11-04 10:20:00.550980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.182 ms 00:19:54.988 [2024-11-04 10:20:00.550993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.573829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.573951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:54.988 [2024-11-04 10:20:00.573982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.800 ms 00:19:54.988 [2024-11-04 10:20:00.573993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.596968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.597096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:54.988 [2024-11-04 10:20:00.597118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.937 ms 00:19:54.988 [2024-11-04 10:20:00.597129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.620598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:54.988 [2024-11-04 10:20:00.620726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:54.988 [2024-11-04 10:20:00.620749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.403 ms 00:19:54.988 [2024-11-04 10:20:00.620760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:54.988 [2024-11-04 10:20:00.620822] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:54.988 [2024-11-04 10:20:00.620845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 
10:20:00.620873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.620997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:19:54.988 [2024-11-04 10:20:00.621191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:54.988 [2024-11-04 10:20:00.621611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.621993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:54.989 [2024-11-04 10:20:00.622175] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:54.989 [2024-11-04 10:20:00.622194] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
29f73b09-5fd3-453f-95e2-8e762e97d9e7
00:19:54.989 [2024-11-04 10:20:00.622207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:54.989 [2024-11-04 10:20:00.622223] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:54.989 [2024-11-04 10:20:00.622236] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:54.989 [2024-11-04 10:20:00.622248] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:54.989 [2024-11-04 10:20:00.622260] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:54.989 [2024-11-04 10:20:00.622273] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:54.989 [2024-11-04 10:20:00.622284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:54.989 [2024-11-04 10:20:00.622304] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:54.989 [2024-11-04 10:20:00.622315] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:54.989 [2024-11-04 10:20:00.622328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:54.989 [2024-11-04 10:20:00.622341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:54.989 [2024-11-04 10:20:00.622355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.508 ms
00:19:54.989 [2024-11-04 10:20:00.622367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:54.989 [2024-11-04 10:20:00.636428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:54.989 [2024-11-04 10:20:00.636472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:54.989 [2024-11-04 10:20:00.636487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.033 ms
00:19:54.989 [2024-11-04 10:20:00.636499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:54.989 [2024-11-04 10:20:00.637015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:54.989 [2024-11-04 10:20:00.637049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:54.989 [2024-11-04 10:20:00.637063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms
00:19:54.989 [2024-11-04 10:20:00.637075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:54.989 [2024-11-04 10:20:00.670111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:54.989 [2024-11-04 10:20:00.670150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:54.989 [2024-11-04 10:20:00.670165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:54.989 [2024-11-04 10:20:00.670177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:54.989 [2024-11-04 10:20:00.670257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:54.989 [2024-11-04 10:20:00.670272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:54.989 [2024-11-04 10:20:00.670287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:54.989 [2024-11-04 10:20:00.670302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
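The statistics dump above is worth decoding: ftl_debug.c reports 960 total (media) writes against 0 user writes for this run, so the write-amplification line reads "WAF: inf". Assuming the usual definition WAF = media writes / user writes, a guarded division reproduces exactly what the dump prints; the helper below is only an illustrative sketch against the numbers above, not SPDK code.

    def waf(media_writes: int, user_writes: int) -> float:
        # Write amplification factor: media writes per user write.
        # With 0 user writes, as in the dump above, the ratio is reported as inf.
        return float("inf") if user_writes == 0 else media_writes / user_writes

    print(waf(960, 0))  # -> inf, matching the "WAF: inf" line above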
00:19:54.989 [2024-11-04 10:20:00.670418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:54.989 [2024-11-04 10:20:00.670436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:54.989 [2024-11-04 10:20:00.670452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:54.989 [2024-11-04 10:20:00.670466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:54.989 [2024-11-04 10:20:00.670490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:54.989 [2024-11-04 10:20:00.670505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:54.989 [2024-11-04 10:20:00.670519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:54.989 [2024-11-04 10:20:00.670534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.748308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.748366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:55.246 [2024-11-04 10:20:00.748385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.748396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:55.246 [2024-11-04 10:20:00.811116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.811128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:55.246 [2024-11-04 10:20:00.811229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.811240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:55.246 [2024-11-04 10:20:00.811336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.811349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:55.246 [2024-11-04 10:20:00.811505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.811518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:55.246 [2024-11-04 10:20:00.811588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:55.246 [2024-11-04 10:20:00.811599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:55.246 [2024-11-04 10:20:00.811644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:55.246 [2024-11-04 10:20:00.811660] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.246 [2024-11-04 10:20:00.811675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.246 [2024-11-04 10:20:00.811688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-04 10:20:00.811742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:55.246 [2024-11-04 10:20:00.811758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.246 [2024-11-04 10:20:00.811771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:55.246 [2024-11-04 10:20:00.811812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.246 [2024-11-04 10:20:00.811964] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 347.306 ms, result 0 00:19:56.189 00:19:56.189 00:19:56.447 10:20:01 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:56.447 [2024-11-04 10:20:02.012908] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:19:56.447 [2024-11-04 10:20:02.013029] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75016 ] 00:19:56.447 [2024-11-04 10:20:02.171824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.706 [2024-11-04 10:20:02.271340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.964 [2024-11-04 10:20:02.522777] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.964 [2024-11-04 10:20:02.522854] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:56.964 [2024-11-04 10:20:02.681171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.964 [2024-11-04 10:20:02.681367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:56.964 [2024-11-04 10:20:02.681398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:56.964 [2024-11-04 10:20:02.681410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.964 [2024-11-04 10:20:02.681476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.964 [2024-11-04 10:20:02.681492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:56.964 [2024-11-04 10:20:02.681508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:56.964 [2024-11-04 10:20:02.681519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.964 [2024-11-04 10:20:02.681547] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:56.964 [2024-11-04 10:20:02.682528] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:56.964 [2024-11-04 10:20:02.682565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.964 [2024-11-04 10:20:02.682578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:56.964 [2024-11-04 10:20:02.682592] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:19:56.964 [2024-11-04 10:20:02.682604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.964 [2024-11-04 10:20:02.683755] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:56.964 [2024-11-04 10:20:02.695934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.964 [2024-11-04 10:20:02.696079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:56.964 [2024-11-04 10:20:02.696102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.181 ms 00:19:56.964 [2024-11-04 10:20:02.696114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.964 [2024-11-04 10:20:02.696206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.964 [2024-11-04 10:20:02.696225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:56.965 [2024-11-04 10:20:02.696239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:56.965 [2024-11-04 10:20:02.696250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.701154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.701189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:56.965 [2024-11-04 10:20:02.701203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.818 ms 00:19:56.965 [2024-11-04 10:20:02.701214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.701307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.701322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:56.965 [2024-11-04 10:20:02.701335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:56.965 [2024-11-04 10:20:02.701347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.701421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.701436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:56.965 [2024-11-04 10:20:02.701450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:56.965 [2024-11-04 10:20:02.701462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.701494] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:56.965 [2024-11-04 10:20:02.704943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.704973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:56.965 [2024-11-04 10:20:02.704987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:19:56.965 [2024-11-04 10:20:02.705001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.705041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.705055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:56.965 [2024-11-04 10:20:02.705068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:56.965 [2024-11-04 10:20:02.705080] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.705109] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:56.965 [2024-11-04 10:20:02.705134] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:56.965 [2024-11-04 10:20:02.705185] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:56.965 [2024-11-04 10:20:02.705212] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:56.965 [2024-11-04 10:20:02.705347] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:56.965 [2024-11-04 10:20:02.705371] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:56.965 [2024-11-04 10:20:02.705388] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:56.965 [2024-11-04 10:20:02.705404] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:56.965 [2024-11-04 10:20:02.705417] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:56.965 [2024-11-04 10:20:02.705428] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:56.965 [2024-11-04 10:20:02.705437] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:56.965 [2024-11-04 10:20:02.705448] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:56.965 [2024-11-04 10:20:02.705460] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:56.965 [2024-11-04 10:20:02.705476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.705489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:56.965 [2024-11-04 10:20:02.705501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.369 ms 00:19:56.965 [2024-11-04 10:20:02.705512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.705623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.965 [2024-11-04 10:20:02.705639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:56.965 [2024-11-04 10:20:02.705652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:19:56.965 [2024-11-04 10:20:02.705664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.965 [2024-11-04 10:20:02.705846] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:56.965 [2024-11-04 10:20:02.705869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:56.965 [2024-11-04 10:20:02.705883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.965 [2024-11-04 10:20:02.705897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.705910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:56.965 [2024-11-04 10:20:02.705921] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.705933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:56.965 
[2024-11-04 10:20:02.705945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:56.965 [2024-11-04 10:20:02.705956] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:56.965 [2024-11-04 10:20:02.705969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.965 [2024-11-04 10:20:02.705980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:56.965 [2024-11-04 10:20:02.705992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:56.965 [2024-11-04 10:20:02.706003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:56.965 [2024-11-04 10:20:02.706014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:56.965 [2024-11-04 10:20:02.706027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:56.965 [2024-11-04 10:20:02.706045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706056] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:56.965 [2024-11-04 10:20:02.706068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:56.965 [2024-11-04 10:20:02.706102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:56.965 [2024-11-04 10:20:02.706137] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:56.965 [2024-11-04 10:20:02.706172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:56.965 [2024-11-04 10:20:02.706205] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:56.965 [2024-11-04 10:20:02.706239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.965 [2024-11-04 10:20:02.706262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:56.965 [2024-11-04 10:20:02.706273] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:56.965 [2024-11-04 10:20:02.706283] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:56.965 [2024-11-04 10:20:02.706293] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:56.965 [2024-11-04 10:20:02.706303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:56.965 [2024-11-04 10:20:02.706313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:56.965 [2024-11-04 10:20:02.706332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:56.965 [2024-11-04 10:20:02.706338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:56.965 [2024-11-04 10:20:02.706345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:56.965 [2024-11-04 10:20:02.706354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:56.965 [2024-11-04 10:20:02.706363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:56.965 [2024-11-04 10:20:02.706370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:57.224 [2024-11-04 10:20:02.706378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:57.224 [2024-11-04 10:20:02.706385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:57.224 [2024-11-04 10:20:02.706392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:57.224 [2024-11-04 10:20:02.706398] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:57.224 [2024-11-04 10:20:02.706405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:57.224 [2024-11-04 10:20:02.706411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:57.224 [2024-11-04 10:20:02.706421] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:57.224 [2024-11-04 10:20:02.706430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.224 [2024-11-04 10:20:02.706438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:57.224 [2024-11-04 10:20:02.706446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:57.224 [2024-11-04 10:20:02.706453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:57.224 [2024-11-04 10:20:02.706459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:57.224 [2024-11-04 10:20:02.706466] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:57.224 [2024-11-04 10:20:02.706473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:57.224 [2024-11-04 10:20:02.706481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:57.224 [2024-11-04 10:20:02.706488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:57.224 [2024-11-04 10:20:02.706495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:57.224 [2024-11-04 10:20:02.706502] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:57.225 [2024-11-04 10:20:02.706537] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:57.225 [2024-11-04 10:20:02.706545] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:57.225 [2024-11-04 10:20:02.706563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:57.225 [2024-11-04 10:20:02.706570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:57.225 [2024-11-04 10:20:02.706577] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:57.225 [2024-11-04 10:20:02.706585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.706593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:57.225 [2024-11-04 10:20:02.706600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.856 ms 00:19:57.225 [2024-11-04 10:20:02.706608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.732035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.732075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.225 [2024-11-04 10:20:02.732087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.380 ms 00:19:57.225 [2024-11-04 10:20:02.732094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.732184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.732197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:57.225 [2024-11-04 10:20:02.732204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:19:57.225 [2024-11-04 10:20:02.732212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.768114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.768156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.225 [2024-11-04 10:20:02.768170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.843 ms 
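The layout dump above states each region twice: once in MiB (e.g. "Region l2p ... blocks: 80.00 MiB") and once in the superblock metadata as hex block offsets and sizes (e.g. "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000"). The two agree if one FTL block is 4 KiB: 0x5000 = 20480 blocks x 4 KiB = 80 MiB, which also matches the 20971520 L2P entries x 4-byte address size reported earlier. The same block size explains the spdk_dd invocation above: --count=262144 blocks x 4 KiB = 1024 MiB, exactly the "1024/1024 [MB]" total in the copy progress that follows. A minimal sketch, assuming that 4 KiB block size (the parsing helper is hypothetical, not part of SPDK), converts the superblock region lines back to the MiB figures in the dump:

    import re

    FTL_BLOCK = 4096  # assumed 4 KiB FTL block: 0x5000 blocks <-> "80.00 MiB" above

    def mib(hex_blocks: str) -> float:
        # Convert a hex block count from the superblock dump to MiB.
        return int(hex_blocks, 16) * FTL_BLOCK / 2**20

    # Two "Region type:..." lines copied from the log above (l2p and data_btm).
    lines = [
        "Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000",
        "Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000",
    ]
    pat = re.compile(r"blk_offs:(0x[0-9a-fA-F]+) blk_sz:(0x[0-9a-fA-F]+)")
    for line in lines:
        offs, size = pat.search(line).groups()
        print(f"offset {mib(offs):.2f} MiB, size {mib(size):.2f} MiB")
    # -> offset 0.12 MiB, size 80.00 MiB      (matches "Region l2p" above)
    # -> offset 0.25 MiB, size 102400.00 MiB  (matches "Region data_btm" above)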
00:19:57.225 [2024-11-04 10:20:02.768178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.768232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.768241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.225 [2024-11-04 10:20:02.768250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.225 [2024-11-04 10:20:02.768260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.768618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.768633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.225 [2024-11-04 10:20:02.768642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:19:57.225 [2024-11-04 10:20:02.768650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.768774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.768808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.225 [2024-11-04 10:20:02.768817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:19:57.225 [2024-11-04 10:20:02.768825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.781483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.781626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.225 [2024-11-04 10:20:02.781641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.636 ms 00:19:57.225 [2024-11-04 10:20:02.781653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.794015] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:57.225 [2024-11-04 10:20:02.794050] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:57.225 [2024-11-04 10:20:02.794062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.794070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:57.225 [2024-11-04 10:20:02.794080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.291 ms 00:19:57.225 [2024-11-04 10:20:02.794087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.818238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.818285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:57.225 [2024-11-04 10:20:02.818297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.112 ms 00:19:57.225 [2024-11-04 10:20:02.818305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.829976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.830014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:57.225 [2024-11-04 10:20:02.830025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.623 ms 00:19:57.225 [2024-11-04 10:20:02.830033] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.841433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.841482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:57.225 [2024-11-04 10:20:02.841493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.362 ms 00:19:57.225 [2024-11-04 10:20:02.841500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.842140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.842157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:57.225 [2024-11-04 10:20:02.842166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:19:57.225 [2024-11-04 10:20:02.842174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.897498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.897553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:57.225 [2024-11-04 10:20:02.897567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.303 ms 00:19:57.225 [2024-11-04 10:20:02.897580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.908025] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:57.225 [2024-11-04 10:20:02.910689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.910720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:57.225 [2024-11-04 10:20:02.910734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.060 ms 00:19:57.225 [2024-11-04 10:20:02.910743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.910861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.910874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:57.225 [2024-11-04 10:20:02.910883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:57.225 [2024-11-04 10:20:02.910892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.910962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.910973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:57.225 [2024-11-04 10:20:02.910982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:57.225 [2024-11-04 10:20:02.910990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.911009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.911018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:57.225 [2024-11-04 10:20:02.911027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:57.225 [2024-11-04 10:20:02.911035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.911066] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:57.225 [2024-11-04 10:20:02.911078] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.911087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:57.225 [2024-11-04 10:20:02.911095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:57.225 [2024-11-04 10:20:02.911103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.934121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.934163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:57.225 [2024-11-04 10:20:02.934178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.999 ms 00:19:57.225 [2024-11-04 10:20:02.934187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.225 [2024-11-04 10:20:02.934262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.225 [2024-11-04 10:20:02.934272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:57.226 [2024-11-04 10:20:02.934281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:19:57.226 [2024-11-04 10:20:02.934289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.226 [2024-11-04 10:20:02.935183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 253.579 ms, result 0 00:19:58.599  [2024-11-04T10:20:05.277Z] Copying: 16/1024 [MB] (16 MBps) [2024-11-04T10:20:06.210Z] Copying: 43/1024 [MB] (26 MBps) [2024-11-04T10:20:07.142Z] Copying: 63/1024 [MB] (20 MBps) [2024-11-04T10:20:08.516Z] Copying: 81/1024 [MB] (17 MBps) [2024-11-04T10:20:09.449Z] Copying: 99/1024 [MB] (18 MBps) [2024-11-04T10:20:10.381Z] Copying: 126/1024 [MB] (26 MBps) [2024-11-04T10:20:11.313Z] Copying: 138/1024 [MB] (11 MBps) [2024-11-04T10:20:12.246Z] Copying: 151/1024 [MB] (13 MBps) [2024-11-04T10:20:13.179Z] Copying: 173/1024 [MB] (22 MBps) [2024-11-04T10:20:14.115Z] Copying: 192/1024 [MB] (18 MBps) [2024-11-04T10:20:15.509Z] Copying: 215/1024 [MB] (23 MBps) [2024-11-04T10:20:16.445Z] Copying: 235/1024 [MB] (19 MBps) [2024-11-04T10:20:17.379Z] Copying: 245/1024 [MB] (10 MBps) [2024-11-04T10:20:18.313Z] Copying: 259/1024 [MB] (13 MBps) [2024-11-04T10:20:19.257Z] Copying: 273/1024 [MB] (14 MBps) [2024-11-04T10:20:20.205Z] Copying: 287/1024 [MB] (13 MBps) [2024-11-04T10:20:21.139Z] Copying: 300/1024 [MB] (13 MBps) [2024-11-04T10:20:22.514Z] Copying: 311/1024 [MB] (10 MBps) [2024-11-04T10:20:23.446Z] Copying: 324/1024 [MB] (12 MBps) [2024-11-04T10:20:24.378Z] Copying: 334/1024 [MB] (10 MBps) [2024-11-04T10:20:25.312Z] Copying: 345/1024 [MB] (11 MBps) [2024-11-04T10:20:26.243Z] Copying: 360/1024 [MB] (14 MBps) [2024-11-04T10:20:27.177Z] Copying: 374/1024 [MB] (14 MBps) [2024-11-04T10:20:28.550Z] Copying: 386/1024 [MB] (12 MBps) [2024-11-04T10:20:29.115Z] Copying: 399/1024 [MB] (12 MBps) [2024-11-04T10:20:30.517Z] Copying: 412/1024 [MB] (13 MBps) [2024-11-04T10:20:31.449Z] Copying: 426/1024 [MB] (13 MBps) [2024-11-04T10:20:32.381Z] Copying: 445/1024 [MB] (18 MBps) [2024-11-04T10:20:33.315Z] Copying: 458/1024 [MB] (13 MBps) [2024-11-04T10:20:34.246Z] Copying: 471/1024 [MB] (12 MBps) [2024-11-04T10:20:35.179Z] Copying: 490/1024 [MB] (19 MBps) [2024-11-04T10:20:36.113Z] Copying: 502/1024 [MB] (11 MBps) [2024-11-04T10:20:37.487Z] Copying: 514/1024 [MB] (12 MBps) [2024-11-04T10:20:38.419Z] Copying: 527/1024 [MB] (12 MBps) [2024-11-04T10:20:39.351Z] Copying: 540/1024 
[MB] (13 MBps) [2024-11-04T10:20:40.341Z] Copying: 551/1024 [MB] (10 MBps) [2024-11-04T10:20:41.277Z] Copying: 565/1024 [MB] (14 MBps) [2024-11-04T10:20:42.211Z] Copying: 576/1024 [MB] (10 MBps) [2024-11-04T10:20:43.146Z] Copying: 588/1024 [MB] (11 MBps) [2024-11-04T10:20:44.539Z] Copying: 599/1024 [MB] (11 MBps) [2024-11-04T10:20:45.480Z] Copying: 610/1024 [MB] (11 MBps) [2024-11-04T10:20:46.420Z] Copying: 621/1024 [MB] (10 MBps) [2024-11-04T10:20:47.393Z] Copying: 632/1024 [MB] (10 MBps) [2024-11-04T10:20:48.341Z] Copying: 644/1024 [MB] (11 MBps) [2024-11-04T10:20:49.341Z] Copying: 655/1024 [MB] (11 MBps) [2024-11-04T10:20:50.275Z] Copying: 666/1024 [MB] (10 MBps) [2024-11-04T10:20:51.211Z] Copying: 677/1024 [MB] (10 MBps) [2024-11-04T10:20:52.145Z] Copying: 703316/1048576 [kB] (10028 kBps) [2024-11-04T10:20:53.518Z] Copying: 698/1024 [MB] (11 MBps) [2024-11-04T10:20:54.469Z] Copying: 711/1024 [MB] (12 MBps) [2024-11-04T10:20:55.407Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-04T10:20:56.346Z] Copying: 732/1024 [MB] (10 MBps) [2024-11-04T10:20:57.285Z] Copying: 760176/1048576 [kB] (10172 kBps) [2024-11-04T10:20:58.224Z] Copying: 769972/1048576 [kB] (9796 kBps) [2024-11-04T10:20:59.164Z] Copying: 779844/1048576 [kB] (9872 kBps) [2024-11-04T10:21:00.546Z] Copying: 771/1024 [MB] (10 MBps) [2024-11-04T10:21:01.149Z] Copying: 782/1024 [MB] (11 MBps) [2024-11-04T10:21:02.534Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-04T10:21:03.468Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-04T10:21:04.403Z] Copying: 817/1024 [MB] (13 MBps) [2024-11-04T10:21:05.336Z] Copying: 829/1024 [MB] (11 MBps) [2024-11-04T10:21:06.270Z] Copying: 840/1024 [MB] (10 MBps) [2024-11-04T10:21:07.204Z] Copying: 853/1024 [MB] (13 MBps) [2024-11-04T10:21:08.136Z] Copying: 864/1024 [MB] (10 MBps) [2024-11-04T10:21:09.529Z] Copying: 876/1024 [MB] (11 MBps) [2024-11-04T10:21:10.461Z] Copying: 889/1024 [MB] (13 MBps) [2024-11-04T10:21:11.394Z] Copying: 902/1024 [MB] (12 MBps) [2024-11-04T10:21:12.327Z] Copying: 914/1024 [MB] (12 MBps) [2024-11-04T10:21:13.297Z] Copying: 925/1024 [MB] (11 MBps) [2024-11-04T10:21:14.231Z] Copying: 938/1024 [MB] (12 MBps) [2024-11-04T10:21:15.164Z] Copying: 950/1024 [MB] (12 MBps) [2024-11-04T10:21:16.120Z] Copying: 963/1024 [MB] (12 MBps) [2024-11-04T10:21:17.492Z] Copying: 978/1024 [MB] (15 MBps) [2024-11-04T10:21:18.426Z] Copying: 991/1024 [MB] (12 MBps) [2024-11-04T10:21:19.364Z] Copying: 1003/1024 [MB] (12 MBps) [2024-11-04T10:21:20.297Z] Copying: 1014/1024 [MB] (11 MBps) [2024-11-04T10:21:20.297Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-11-04 10:21:19.958650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.958721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:14.552 [2024-11-04 10:21:19.958736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:14.552 [2024-11-04 10:21:19.958746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:19.958768] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:14.552 [2024-11-04 10:21:19.961448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.961488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:14.552 [2024-11-04 10:21:19.961501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.643 ms 00:21:14.552 [2024-11-04 10:21:19.961518] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:19.961744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.961754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:14.552 [2024-11-04 10:21:19.961763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:21:14.552 [2024-11-04 10:21:19.961771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:19.965764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.965807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:14.552 [2024-11-04 10:21:19.965818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.968 ms 00:21:14.552 [2024-11-04 10:21:19.965827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:19.972176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.972410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:14.552 [2024-11-04 10:21:19.972430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.326 ms 00:21:14.552 [2024-11-04 10:21:19.972439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:19.999492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:19.999721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:14.552 [2024-11-04 10:21:19.999739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.978 ms 00:21:14.552 [2024-11-04 10:21:19.999747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.016731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.016802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:14.552 [2024-11-04 10:21:20.016817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.779 ms 00:21:14.552 [2024-11-04 10:21:20.016826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.017009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.017020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:14.552 [2024-11-04 10:21:20.017032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:14.552 [2024-11-04 10:21:20.017039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.044172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.044219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:14.552 [2024-11-04 10:21:20.044232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.115 ms 00:21:14.552 [2024-11-04 10:21:20.044240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.070988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.071213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:14.552 [2024-11-04 10:21:20.071234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 26.695 ms 00:21:14.552 [2024-11-04 10:21:20.071243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.096753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.096828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:14.552 [2024-11-04 10:21:20.096841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.464 ms 00:21:14.552 [2024-11-04 10:21:20.096849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.121922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.552 [2024-11-04 10:21:20.122112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:14.552 [2024-11-04 10:21:20.122132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.975 ms 00:21:14.552 [2024-11-04 10:21:20.122139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.552 [2024-11-04 10:21:20.122182] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:14.552 [2024-11-04 10:21:20.122197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:14.552 [2024-11-04 10:21:20.122272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 
state: free 00:21:14.553 [2024-11-04 10:21:20.122341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 
0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:14.553 [2024-11-04 10:21:20.122881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122919] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:14.554 [2024-11-04 10:21:20.122996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:14.554 [2024-11-04 10:21:20.123005] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29f73b09-5fd3-453f-95e2-8e762e97d9e7 00:21:14.554 [2024-11-04 10:21:20.123015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:14.554 [2024-11-04 10:21:20.123023] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:14.554 [2024-11-04 10:21:20.123030] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:14.554 [2024-11-04 10:21:20.123037] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:14.554 [2024-11-04 10:21:20.123044] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:14.554 [2024-11-04 10:21:20.123052] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:14.554 [2024-11-04 10:21:20.123069] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:14.554 [2024-11-04 10:21:20.123076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:14.554 [2024-11-04 10:21:20.123082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:14.554 [2024-11-04 10:21:20.123090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.554 [2024-11-04 10:21:20.123097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:14.554 [2024-11-04 10:21:20.123106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:21:14.554 [2024-11-04 10:21:20.123113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.135559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.554 [2024-11-04 10:21:20.135613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:14.554 [2024-11-04 10:21:20.135625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.385 ms 00:21:14.554 [2024-11-04 10:21:20.135633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.136005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:14.554 [2024-11-04 10:21:20.136016] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:14.554 [2024-11-04 10:21:20.136025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.347 ms 00:21:14.554 [2024-11-04 10:21:20.136032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.168856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.554 [2024-11-04 10:21:20.169063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:14.554 [2024-11-04 10:21:20.169081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.554 [2024-11-04 10:21:20.169090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.169160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.554 [2024-11-04 10:21:20.169170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:14.554 [2024-11-04 10:21:20.169179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.554 [2024-11-04 10:21:20.169187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.169260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.554 [2024-11-04 10:21:20.169270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:14.554 [2024-11-04 10:21:20.169278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.554 [2024-11-04 10:21:20.169287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.169303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.554 [2024-11-04 10:21:20.169311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:14.554 [2024-11-04 10:21:20.169320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.554 [2024-11-04 10:21:20.169328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.554 [2024-11-04 10:21:20.247975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.554 [2024-11-04 10:21:20.248182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:14.554 [2024-11-04 10:21:20.248202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.554 [2024-11-04 10:21:20.248210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:14.812 [2024-11-04 10:21:20.313570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.313579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:14.812 [2024-11-04 10:21:20.313685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.313692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313727] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:14.812 [2024-11-04 10:21:20.313743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.313750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:14.812 [2024-11-04 10:21:20.313883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.313890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:14.812 [2024-11-04 10:21:20.313937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.313945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.313977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.313988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:14.812 [2024-11-04 10:21:20.313995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.314003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.314042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:14.812 [2024-11-04 10:21:20.314052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:14.812 [2024-11-04 10:21:20.314059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:14.812 [2024-11-04 10:21:20.314067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:14.812 [2024-11-04 10:21:20.314173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 355.497 ms, result 0 00:21:15.379 00:21:15.379 00:21:15.379 10:21:20 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:17.937 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:17.938 10:21:23 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:17.938 [2024-11-04 10:21:23.211677] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:21:17.938 [2024-11-04 10:21:23.211830] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75855 ] 00:21:17.938 [2024-11-04 10:21:23.367619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.938 [2024-11-04 10:21:23.471752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:18.196 [2024-11-04 10:21:23.743675] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:18.196 [2024-11-04 10:21:23.743754] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:18.196 [2024-11-04 10:21:23.901085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.196 [2024-11-04 10:21:23.901132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:18.196 [2024-11-04 10:21:23.901149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:18.196 [2024-11-04 10:21:23.901157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.196 [2024-11-04 10:21:23.901212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.196 [2024-11-04 10:21:23.901222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:18.196 [2024-11-04 10:21:23.901232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:18.196 [2024-11-04 10:21:23.901239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.196 [2024-11-04 10:21:23.901259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:18.197 [2024-11-04 10:21:23.901968] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:18.197 [2024-11-04 10:21:23.902093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.902104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:18.197 [2024-11-04 10:21:23.902112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:21:18.197 [2024-11-04 10:21:23.902120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.903277] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:18.197 [2024-11-04 10:21:23.916993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.917253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:18.197 [2024-11-04 10:21:23.917317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.714 ms 00:21:18.197 [2024-11-04 10:21:23.917340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.917432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.918109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:18.197 [2024-11-04 10:21:23.918199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:21:18.197 [2024-11-04 10:21:23.918226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.924100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:18.197 [2024-11-04 10:21:23.924236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:18.197 [2024-11-04 10:21:23.924292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.708 ms 00:21:18.197 [2024-11-04 10:21:23.924326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.924429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.924451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:18.197 [2024-11-04 10:21:23.924472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:18.197 [2024-11-04 10:21:23.924490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.924556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.924580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:18.197 [2024-11-04 10:21:23.924646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:18.197 [2024-11-04 10:21:23.924668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.924707] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:18.197 [2024-11-04 10:21:23.928155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.928270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:18.197 [2024-11-04 10:21:23.928345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.455 ms 00:21:18.197 [2024-11-04 10:21:23.928375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.928435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.928459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:18.197 [2024-11-04 10:21:23.928482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:18.197 [2024-11-04 10:21:23.928502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.928571] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:18.197 [2024-11-04 10:21:23.928706] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:18.197 [2024-11-04 10:21:23.928775] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:18.197 [2024-11-04 10:21:23.928833] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:18.197 [2024-11-04 10:21:23.928965] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:18.197 [2024-11-04 10:21:23.929002] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:18.197 [2024-11-04 10:21:23.929038] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:18.197 [2024-11-04 10:21:23.929074] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:18.197 [2024-11-04 10:21:23.929160] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:18.197 [2024-11-04 10:21:23.929191] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:18.197 [2024-11-04 10:21:23.929210] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:18.197 [2024-11-04 10:21:23.929228] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:18.197 [2024-11-04 10:21:23.929277] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:18.197 [2024-11-04 10:21:23.929303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.929323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:18.197 [2024-11-04 10:21:23.929342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.736 ms 00:21:18.197 [2024-11-04 10:21:23.929360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.929463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.197 [2024-11-04 10:21:23.929489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:18.197 [2024-11-04 10:21:23.929508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:18.197 [2024-11-04 10:21:23.929543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.197 [2024-11-04 10:21:23.929659] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:18.197 [2024-11-04 10:21:23.929687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:18.197 [2024-11-04 10:21:23.929707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.197 [2024-11-04 10:21:23.929726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.929745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:18.197 [2024-11-04 10:21:23.929763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.929793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:18.197 [2024-11-04 10:21:23.929813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:18.197 [2024-11-04 10:21:23.929832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:18.197 [2024-11-04 10:21:23.929850] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.197 [2024-11-04 10:21:23.929868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:18.197 [2024-11-04 10:21:23.929927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:18.197 [2024-11-04 10:21:23.929949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:18.197 [2024-11-04 10:21:23.929968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:18.197 [2024-11-04 10:21:23.929987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:18.197 [2024-11-04 10:21:23.930013] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:18.197 [2024-11-04 10:21:23.930048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930128] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:18.197 [2024-11-04 10:21:23.930164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:18.197 [2024-11-04 10:21:23.930218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:18.197 [2024-11-04 10:21:23.930306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930370] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:18.197 [2024-11-04 10:21:23.930389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:18.197 [2024-11-04 10:21:23.930464] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.197 [2024-11-04 10:21:23.930501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:18.197 [2024-11-04 10:21:23.930519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:18.197 [2024-11-04 10:21:23.930557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:18.197 [2024-11-04 10:21:23.930578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:18.197 [2024-11-04 10:21:23.930598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:18.197 [2024-11-04 10:21:23.930616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:18.197 [2024-11-04 10:21:23.930651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:18.197 [2024-11-04 10:21:23.930669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930686] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:18.197 [2024-11-04 10:21:23.930705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:18.197 [2024-11-04 10:21:23.930754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:18.197 [2024-11-04 10:21:23.930776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:18.197 [2024-11-04 10:21:23.930806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:18.198 [2024-11-04 10:21:23.930826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:18.198 [2024-11-04 10:21:23.930844] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:18.198 
[2024-11-04 10:21:23.930862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:18.198 [2024-11-04 10:21:23.930880] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:18.198 [2024-11-04 10:21:23.930947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:18.198 [2024-11-04 10:21:23.930968] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:18.198 [2024-11-04 10:21:23.930999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:18.198 [2024-11-04 10:21:23.931058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:18.198 [2024-11-04 10:21:23.931086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:18.198 [2024-11-04 10:21:23.931114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:18.198 [2024-11-04 10:21:23.931142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:18.198 [2024-11-04 10:21:23.931170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:18.198 [2024-11-04 10:21:23.931198] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:18.198 [2024-11-04 10:21:23.931260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:18.198 [2024-11-04 10:21:23.931292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:18.198 [2024-11-04 10:21:23.931320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:18.198 [2024-11-04 10:21:23.931521] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:18.198 [2024-11-04 10:21:23.931552] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:18.198 [2024-11-04 10:21:23.931647] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:18.198 [2024-11-04 10:21:23.931675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:18.198 [2024-11-04 10:21:23.931704] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:18.198 [2024-11-04 10:21:23.931763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.198 [2024-11-04 10:21:23.931794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:18.198 [2024-11-04 10:21:23.931816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.172 ms 00:21:18.198 [2024-11-04 10:21:23.931835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:23.958893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:23.959049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:18.456 [2024-11-04 10:21:23.959100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.991 ms 00:21:18.456 [2024-11-04 10:21:23.959123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:23.959236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:23.959251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:18.456 [2024-11-04 10:21:23.959260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:21:18.456 [2024-11-04 10:21:23.959267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.004358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.004417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:18.456 [2024-11-04 10:21:24.004431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.019 ms 00:21:18.456 [2024-11-04 10:21:24.004440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.004500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.004510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:18.456 [2024-11-04 10:21:24.004519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:18.456 [2024-11-04 10:21:24.004531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.005005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.005023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:18.456 [2024-11-04 10:21:24.005032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.392 ms 00:21:18.456 [2024-11-04 10:21:24.005040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.005175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.005189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:18.456 [2024-11-04 10:21:24.005197] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:21:18.456 [2024-11-04 10:21:24.005205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.018745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.018995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:18.456 [2024-11-04 10:21:24.019013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.515 ms 00:21:18.456 [2024-11-04 10:21:24.019026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.032684] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:18.456 [2024-11-04 10:21:24.032746] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:18.456 [2024-11-04 10:21:24.032760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.032768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:18.456 [2024-11-04 10:21:24.032795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.603 ms 00:21:18.456 [2024-11-04 10:21:24.032804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.058474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.058732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:18.456 [2024-11-04 10:21:24.058753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.611 ms 00:21:18.456 [2024-11-04 10:21:24.058761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.072500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.072556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:18.456 [2024-11-04 10:21:24.072569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:21:18.456 [2024-11-04 10:21:24.072578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.456 [2024-11-04 10:21:24.085759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.456 [2024-11-04 10:21:24.085832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:18.456 [2024-11-04 10:21:24.085846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.122 ms 00:21:18.457 [2024-11-04 10:21:24.085854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.086530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.086550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:18.457 [2024-11-04 10:21:24.086560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:21:18.457 [2024-11-04 10:21:24.086568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.146571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.146636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:18.457 [2024-11-04 10:21:24.146650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.982 ms 00:21:18.457 [2024-11-04 10:21:24.146667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.158518] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:18.457 [2024-11-04 10:21:24.161451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.161492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:18.457 [2024-11-04 10:21:24.161505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.718 ms 00:21:18.457 [2024-11-04 10:21:24.161512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.161614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.161625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:18.457 [2024-11-04 10:21:24.161634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:18.457 [2024-11-04 10:21:24.161642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.161708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.161718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:18.457 [2024-11-04 10:21:24.161726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:18.457 [2024-11-04 10:21:24.161733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.161752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.161759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:18.457 [2024-11-04 10:21:24.161767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:18.457 [2024-11-04 10:21:24.161774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.161823] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:18.457 [2024-11-04 10:21:24.161835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.161843] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:18.457 [2024-11-04 10:21:24.161850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:21:18.457 [2024-11-04 10:21:24.161858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.187793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.188024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:18.457 [2024-11-04 10:21:24.188045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.915 ms 00:21:18.457 [2024-11-04 10:21:24.188054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:18.457 [2024-11-04 10:21:24.188151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:18.457 [2024-11-04 10:21:24.188161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:18.457 [2024-11-04 10:21:24.188170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:21:18.457 [2024-11-04 10:21:24.188177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:18.457 [2024-11-04 10:21:24.189231] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 287.698 ms, result 0
[2024-11-04T10:22:21.468Z] Copying: 1048412/1048576 [kB] (5820 kBps) [2024-11-04T10:22:21.468Z] Copying: 1024/1024 [MB] (average 17 MBps)[2024-11-04 10:22:21.388642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.723 [2024-11-04 10:22:21.388709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:15.723 [2024-11-04 10:22:21.388723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:15.723 [2024-11-04 10:22:21.388733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.723 [2024-11-04 10:22:21.392399] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:15.723 [2024-11-04 10:22:21.396383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.723 [2024-11-04 10:22:21.396424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:15.723 [2024-11-04 10:22:21.396438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.841 ms 00:22:15.723 [2024-11-04 10:22:21.396449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.723 [2024-11-04 10:22:21.409395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.723 [2024-11-04 10:22:21.409450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:15.723 [2024-11-04 10:22:21.409462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.950 ms 00:22:15.723 [2024-11-04 10:22:21.409471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.723 [2024-11-04 10:22:21.434391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.723 [2024-11-04 10:22:21.434464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:15.723 [2024-11-04 10:22:21.434479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.890 ms 00:22:15.723 [2024-11-04 10:22:21.434487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.723 [2024-11-04 10:22:21.440708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.723 [2024-11-04 10:22:21.440761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:15.723 [2024-11-04 10:22:21.440772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.189 ms 00:22:15.723 [2024-11-04 10:22:21.440798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.723 [2024-11-04 10:22:21.466061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.984 [2024-11-04 10:22:21.466271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:15.984 [2024-11-04 10:22:21.466290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.203 ms 00:22:15.984 [2024-11-04 10:22:21.466297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.984 [2024-11-04 10:22:21.481095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.984 [2024-11-04 10:22:21.481280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:15.984 [2024-11-04 10:22:21.481311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.739 ms 00:22:15.984 [2024-11-04 10:22:21.481319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.984 [2024-11-04 10:22:21.648990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.984 
[2024-11-04 10:22:21.649076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:15.984 [2024-11-04 10:22:21.649090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 167.612 ms 00:22:15.984 [2024-11-04 10:22:21.649098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.984 [2024-11-04 10:22:21.675388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.984 [2024-11-04 10:22:21.675438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:15.984 [2024-11-04 10:22:21.675451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.274 ms 00:22:15.984 [2024-11-04 10:22:21.675459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:15.984 [2024-11-04 10:22:21.701389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:15.984 [2024-11-04 10:22:21.701452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:15.984 [2024-11-04 10:22:21.701466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.875 ms 00:22:15.984 [2024-11-04 10:22:21.701474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.247 [2024-11-04 10:22:21.727530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.247 [2024-11-04 10:22:21.727590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:16.247 [2024-11-04 10:22:21.727603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.001 ms 00:22:16.247 [2024-11-04 10:22:21.727611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.247 [2024-11-04 10:22:21.753293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.247 [2024-11-04 10:22:21.753351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:16.247 [2024-11-04 10:22:21.753364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.558 ms 00:22:16.247 [2024-11-04 10:22:21.753372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.247 [2024-11-04 10:22:21.753426] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:16.247 [2024-11-04 10:22:21.753442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 115200 / 261120 wr_cnt: 1 state: open 00:22:16.247 [2024-11-04 10:22:21.753453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-100: 0 / 261120 wr_cnt: 0 state: free 00:22:16.248 [2024-11-04 10:22:21.754240] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:16.248 [2024-11-04 10:22:21.754248] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29f73b09-5fd3-453f-95e2-8e762e97d9e7 00:22:16.248 [2024-11-04 10:22:21.754256] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 115200 00:22:16.248 [2024-11-04 10:22:21.754263] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 116160 00:22:16.248 [2024-11-04 10:22:21.754270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 115200 00:22:16.248 [2024-11-04 10:22:21.754279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0083 00:22:16.248 [2024-11-04 10:22:21.754285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:16.248 [2024-11-04 10:22:21.754293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:16.248 [2024-11-04 10:22:21.754313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:16.248 [2024-11-04 10:22:21.754320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:16.248 [2024-11-04
10:22:21.754326] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:16.248 [2024-11-04 10:22:21.754334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.248 [2024-11-04 10:22:21.754342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:16.248 [2024-11-04 10:22:21.754350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.909 ms 00:22:16.248 [2024-11-04 10:22:21.754357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.767284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.248 [2024-11-04 10:22:21.767481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:16.248 [2024-11-04 10:22:21.767500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.904 ms 00:22:16.248 [2024-11-04 10:22:21.767508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.767931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:16.248 [2024-11-04 10:22:21.767944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:16.248 [2024-11-04 10:22:21.767953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:22:16.248 [2024-11-04 10:22:21.767961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.802760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.802829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:16.248 [2024-11-04 10:22:21.802845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.802854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.802928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.802937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:16.248 [2024-11-04 10:22:21.802945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.802953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.803023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.803033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:16.248 [2024-11-04 10:22:21.803041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.803051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.803067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.803075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:16.248 [2024-11-04 10:22:21.803082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.803090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.886533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.886811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:16.248 [2024-11-04 10:22:21.886832] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.886851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:16.248 [2024-11-04 10:22:21.956108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:16.248 [2024-11-04 10:22:21.956230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:16.248 [2024-11-04 10:22:21.956297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:16.248 [2024-11-04 10:22:21.956436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:16.248 [2024-11-04 10:22:21.956497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:16.248 [2024-11-04 10:22:21.956562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:16.248 [2024-11-04 10:22:21.956630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:16.248 [2024-11-04 10:22:21.956638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:16.248 [2024-11-04 10:22:21.956646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:16.248 [2024-11-04 10:22:21.956773] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 570.917 ms, result 0 00:22:17.656 00:22:17.656 00:22:17.657 
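
The statistics block in the shutdown dump above reports WAF: 1.0083 next to total writes: 116160 and user writes: 115200, which is consistent with write amplification computed as total media writes divided by user writes. The sketch below, which is illustrative only and not SPDK code, recomputes that figure from the dumped counters, and also checks the size of the spdk_dd restore step that follows, assuming dd-style semantics in which --skip and --count are given in the 4 KiB logical blocks the ftl0 bdev exposes (an inference from this log, not a statement about spdk_dd's documented interface).

/* Illustrative sketch only -- not SPDK code. Recomputes figures that
 * appear in this log from the raw counters around them. */
#include <stdio.h>

int main(void)
{
    /* From the ftl_dev_dump_stats block above. */
    const double total_writes = 116160.0; /* media writes, incl. metadata */
    const double user_writes  = 115200.0; /* writes issued by the user    */
    printf("WAF: %.4f\n", total_writes / user_writes); /* prints 1.0083 */

    /* From the spdk_dd invocation below; the 4 KiB block size is
     * inferred from this log, not from spdk_dd documentation. */
    const unsigned long long block = 4096ULL;
    const unsigned long long skip = 131072ULL, count = 262144ULL;
    printf("skip:  %llu MiB\n", skip * block >> 20);  /* 512 MiB offset  */
    printf("count: %llu MiB\n", count * block >> 20); /* 1024 MiB copied */
    return 0;
}

Under that block-size assumption, the 1024 MiB figure matches the "Copying: 1024/1024 [MB]" total that the copy progress reports further down.
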
10:22:23 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:17.657 [2024-11-04 10:22:23.376487] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:22:17.657 [2024-11-04 10:22:23.376642] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76466 ] 00:22:17.919 [2024-11-04 10:22:23.542956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:18.180 [2024-11-04 10:22:23.684648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.441 [2024-11-04 10:22:23.988687] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:18.441 [2024-11-04 10:22:23.988766] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:18.441 [2024-11-04 10:22:24.149386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.441 [2024-11-04 10:22:24.149452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:18.441 [2024-11-04 10:22:24.149470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:22:18.441 [2024-11-04 10:22:24.149479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.441 [2024-11-04 10:22:24.149535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.441 [2024-11-04 10:22:24.149547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:18.441 [2024-11-04 10:22:24.149557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:18.441 [2024-11-04 10:22:24.149565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.441 [2024-11-04 10:22:24.149585] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:18.441 [2024-11-04 10:22:24.150314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:18.441 [2024-11-04 10:22:24.150334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.441 [2024-11-04 10:22:24.150343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:18.441 [2024-11-04 10:22:24.150352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.754 ms 00:22:18.441 [2024-11-04 10:22:24.150360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.441 [2024-11-04 10:22:24.151810] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:18.441 [2024-11-04 10:22:24.165275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.441 [2024-11-04 10:22:24.165331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:18.441 [2024-11-04 10:22:24.165345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:22:18.441 [2024-11-04 10:22:24.165353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.165431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.165445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Validate super block 00:22:18.442 [2024-11-04 10:22:24.165454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:22:18.442 [2024-11-04 10:22:24.165461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.171714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.171758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:18.442 [2024-11-04 10:22:24.171768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.174 ms 00:22:18.442 [2024-11-04 10:22:24.171776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.171867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.171876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:18.442 [2024-11-04 10:22:24.171884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:22:18.442 [2024-11-04 10:22:24.171892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.171934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.171944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:18.442 [2024-11-04 10:22:24.171952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:18.442 [2024-11-04 10:22:24.171960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.171982] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:18.442 [2024-11-04 10:22:24.175561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.175592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:18.442 [2024-11-04 10:22:24.175602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.584 ms 00:22:18.442 [2024-11-04 10:22:24.175613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.175644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.175652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:18.442 [2024-11-04 10:22:24.175660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:18.442 [2024-11-04 10:22:24.175668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.175704] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:18.442 [2024-11-04 10:22:24.175723] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:18.442 [2024-11-04 10:22:24.175758] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:18.442 [2024-11-04 10:22:24.175776] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:18.442 [2024-11-04 10:22:24.175890] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:18.442 [2024-11-04 10:22:24.175900] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 
0x48 bytes 00:22:18.442 [2024-11-04 10:22:24.175911] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:18.442 [2024-11-04 10:22:24.175921] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:18.442 [2024-11-04 10:22:24.175930] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:18.442 [2024-11-04 10:22:24.175938] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:18.442 [2024-11-04 10:22:24.175945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:18.442 [2024-11-04 10:22:24.175952] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:18.442 [2024-11-04 10:22:24.175959] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:18.442 [2024-11-04 10:22:24.175970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.175977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:18.442 [2024-11-04 10:22:24.175985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:22:18.442 [2024-11-04 10:22:24.175992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.176074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.442 [2024-11-04 10:22:24.176082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:18.442 [2024-11-04 10:22:24.176089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:22:18.442 [2024-11-04 10:22:24.176096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.442 [2024-11-04 10:22:24.176213] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:18.442 [2024-11-04 10:22:24.176226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:18.442 [2024-11-04 10:22:24.176234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:18.442 [2024-11-04 10:22:24.176261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176275] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:18.442 [2024-11-04 10:22:24.176281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.442 [2024-11-04 10:22:24.176295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:18.442 [2024-11-04 10:22:24.176301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:18.442 [2024-11-04 10:22:24.176310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:18.442 [2024-11-04 10:22:24.176316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:18.442 [2024-11-04 10:22:24.176323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:18.442 [2024-11-04 10:22:24.176346] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:18.442 [2024-11-04 10:22:24.176360] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:18.442 [2024-11-04 10:22:24.176380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:18.442 [2024-11-04 10:22:24.176400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:18.442 [2024-11-04 10:22:24.176419] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:18.442 [2024-11-04 10:22:24.176439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:18.442 [2024-11-04 10:22:24.176458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.442 [2024-11-04 10:22:24.176471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:18.442 [2024-11-04 10:22:24.176477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:18.442 [2024-11-04 10:22:24.176483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:18.442 [2024-11-04 10:22:24.176490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:18.442 [2024-11-04 10:22:24.176497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:18.442 [2024-11-04 10:22:24.176503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:18.442 [2024-11-04 10:22:24.176516] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:18.442 [2024-11-04 10:22:24.176522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 [2024-11-04 10:22:24.176529] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:18.442 [2024-11-04 10:22:24.176538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:18.442 [2024-11-04 10:22:24.176545] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:18.442 [2024-11-04 10:22:24.176553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:18.442 
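
The MiB sizes in the layout dump above and the blk_offs/blk_sz values in the superblock region dump just below describe the same regions in two units: the hex values are in FTL blocks. Comparing the two (the superblock's blk_sz:0x20 against its 0.12 MiB, or the l2p region's blk_sz:0x5000 against its 80.00 MiB) implies a 4 KiB FTL block, and the 80 MiB also follows from the stated L2P geometry of 20971520 entries with a 4-byte address size. A small cross-check, with the 4 KiB block size being an inference from this log rather than a documented constant:

/* Illustrative sketch only. Cross-checks region sizes from the layout
 * dump; the 4 KiB FTL block size is inferred from the log itself. */
#include <stdio.h>

int main(void)
{
    const double block = 4096.0; /* inferred FTL block size, in bytes */
    printf("sb:  %.2f MiB\n", 0x20   * block / (1 << 20)); /* 0.12 MiB  */
    printf("l2p: %.2f MiB\n", 0x5000 * block / (1 << 20)); /* 80.00 MiB */

    /* l2p size from the stated geometry: entries * address size. */
    const double entries = 20971520.0, addr_size = 4.0;
    printf("l2p from geometry: %.2f MiB\n",
           entries * addr_size / (1 << 20)); /* 80.00 MiB */
    return 0;
}
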
[2024-11-04 10:22:24.176561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:18.442 [2024-11-04 10:22:24.176567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:18.442 [2024-11-04 10:22:24.176573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:18.442 [2024-11-04 10:22:24.176580] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:18.442 [2024-11-04 10:22:24.176586] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:18.442 [2024-11-04 10:22:24.176592] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:18.443 [2024-11-04 10:22:24.176601] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:18.443 [2024-11-04 10:22:24.176610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:18.443 [2024-11-04 10:22:24.176625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:18.443 [2024-11-04 10:22:24.176632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:18.443 [2024-11-04 10:22:24.176639] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:18.443 [2024-11-04 10:22:24.176646] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:18.443 [2024-11-04 10:22:24.176653] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:18.443 [2024-11-04 10:22:24.176660] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:18.443 [2024-11-04 10:22:24.176667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:18.443 [2024-11-04 10:22:24.176674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:18.443 [2024-11-04 10:22:24.176680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:18.443 [2024-11-04 10:22:24.176715] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB 
metadata layout - base dev: 00:22:18.443 [2024-11-04 10:22:24.176722] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:18.443 [2024-11-04 10:22:24.176740] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:18.443 [2024-11-04 10:22:24.176747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:18.443 [2024-11-04 10:22:24.176754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:18.443 [2024-11-04 10:22:24.176767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.443 [2024-11-04 10:22:24.176775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:18.443 [2024-11-04 10:22:24.176793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.622 ms 00:22:18.443 [2024-11-04 10:22:24.176801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.204062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.204114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:18.704 [2024-11-04 10:22:24.204125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.214 ms 00:22:18.704 [2024-11-04 10:22:24.204133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.204230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.204242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:18.704 [2024-11-04 10:22:24.204250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:18.704 [2024-11-04 10:22:24.204257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.254040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.254097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:18.704 [2024-11-04 10:22:24.254111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.720 ms 00:22:18.704 [2024-11-04 10:22:24.254119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.254179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.254189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:18.704 [2024-11-04 10:22:24.254198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:18.704 [2024-11-04 10:22:24.254208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.254595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.254612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:18.704 [2024-11-04 10:22:24.254621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:22:18.704 [2024-11-04 10:22:24.254628] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.254757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.254766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:18.704 [2024-11-04 10:22:24.254774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:22:18.704 [2024-11-04 10:22:24.254809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.267940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.267980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:18.704 [2024-11-04 10:22:24.267991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.106 ms 00:22:18.704 [2024-11-04 10:22:24.268002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.280386] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:18.704 [2024-11-04 10:22:24.280436] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:18.704 [2024-11-04 10:22:24.280449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.280458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:18.704 [2024-11-04 10:22:24.280468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.334 ms 00:22:18.704 [2024-11-04 10:22:24.280476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.304942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.305012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:18.704 [2024-11-04 10:22:24.305024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.416 ms 00:22:18.704 [2024-11-04 10:22:24.305032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.317266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.317326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:18.704 [2024-11-04 10:22:24.317338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.186 ms 00:22:18.704 [2024-11-04 10:22:24.317346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.329317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.329367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:18.704 [2024-11-04 10:22:24.329378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.924 ms 00:22:18.704 [2024-11-04 10:22:24.329385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.330049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.330072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:18.704 [2024-11-04 10:22:24.330081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.539 ms 00:22:18.704 [2024-11-04 10:22:24.330089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:22:18.704 [2024-11-04 10:22:24.386393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.386448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:18.704 [2024-11-04 10:22:24.386462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.281 ms 00:22:18.704 [2024-11-04 10:22:24.386475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.397270] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:18.704 [2024-11-04 10:22:24.400009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.400043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:18.704 [2024-11-04 10:22:24.400055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.480 ms 00:22:18.704 [2024-11-04 10:22:24.400064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.400173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.400184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:18.704 [2024-11-04 10:22:24.400193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:22:18.704 [2024-11-04 10:22:24.400201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.401584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.401618] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:18.704 [2024-11-04 10:22:24.401627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.344 ms 00:22:18.704 [2024-11-04 10:22:24.401635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.401660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.401668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:18.704 [2024-11-04 10:22:24.401676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:18.704 [2024-11-04 10:22:24.401683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.401720] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:18.704 [2024-11-04 10:22:24.401732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.401740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:18.704 [2024-11-04 10:22:24.401747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:22:18.704 [2024-11-04 10:22:24.401755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.425310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 [2024-11-04 10:22:24.425357] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:18.704 [2024-11-04 10:22:24.425369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:22:18.704 [2024-11-04 10:22:24.425377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.425458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:18.704 
[2024-11-04 10:22:24.425468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:18.704 [2024-11-04 10:22:24.425477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:22:18.704 [2024-11-04 10:22:24.425485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:18.704 [2024-11-04 10:22:24.427352] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.055 ms, result 0 00:22:20.098  [2024-11-04T10:22:26.786Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-04T10:22:27.728Z] Copying: 64/1024 [MB] (35 MBps) [2024-11-04T10:22:28.672Z] Copying: 85/1024 [MB] (20 MBps) [2024-11-04T10:22:29.620Z] Copying: 111/1024 [MB] (26 MBps) [2024-11-04T10:22:31.004Z] Copying: 128/1024 [MB] (16 MBps) [2024-11-04T10:22:31.938Z] Copying: 147/1024 [MB] (18 MBps) [2024-11-04T10:22:32.871Z] Copying: 165/1024 [MB] (18 MBps) [2024-11-04T10:22:33.805Z] Copying: 184/1024 [MB] (19 MBps) [2024-11-04T10:22:34.739Z] Copying: 209/1024 [MB] (25 MBps) [2024-11-04T10:22:35.703Z] Copying: 239/1024 [MB] (30 MBps) [2024-11-04T10:22:36.636Z] Copying: 264/1024 [MB] (24 MBps) [2024-11-04T10:22:38.010Z] Copying: 282/1024 [MB] (18 MBps) [2024-11-04T10:22:38.941Z] Copying: 298/1024 [MB] (16 MBps) [2024-11-04T10:22:39.872Z] Copying: 315/1024 [MB] (16 MBps) [2024-11-04T10:22:40.805Z] Copying: 332/1024 [MB] (16 MBps) [2024-11-04T10:22:41.738Z] Copying: 354/1024 [MB] (22 MBps) [2024-11-04T10:22:42.670Z] Copying: 382/1024 [MB] (27 MBps) [2024-11-04T10:22:43.657Z] Copying: 399/1024 [MB] (17 MBps) [2024-11-04T10:22:45.031Z] Copying: 418/1024 [MB] (19 MBps) [2024-11-04T10:22:45.963Z] Copying: 430/1024 [MB] (11 MBps) [2024-11-04T10:22:46.915Z] Copying: 442/1024 [MB] (12 MBps) [2024-11-04T10:22:47.847Z] Copying: 457/1024 [MB] (14 MBps) [2024-11-04T10:22:48.778Z] Copying: 468/1024 [MB] (10 MBps) [2024-11-04T10:22:49.712Z] Copying: 479/1024 [MB] (11 MBps) [2024-11-04T10:22:50.648Z] Copying: 492/1024 [MB] (12 MBps) [2024-11-04T10:22:52.016Z] Copying: 504/1024 [MB] (12 MBps) [2024-11-04T10:22:52.948Z] Copying: 515/1024 [MB] (10 MBps) [2024-11-04T10:22:53.879Z] Copying: 526/1024 [MB] (10 MBps) [2024-11-04T10:22:54.813Z] Copying: 538/1024 [MB] (12 MBps) [2024-11-04T10:22:55.747Z] Copying: 550/1024 [MB] (12 MBps) [2024-11-04T10:22:56.681Z] Copying: 562/1024 [MB] (11 MBps) [2024-11-04T10:22:58.053Z] Copying: 573/1024 [MB] (11 MBps) [2024-11-04T10:22:58.619Z] Copying: 586/1024 [MB] (12 MBps) [2024-11-04T10:22:59.993Z] Copying: 597/1024 [MB] (11 MBps) [2024-11-04T10:23:00.929Z] Copying: 608/1024 [MB] (10 MBps) [2024-11-04T10:23:01.868Z] Copying: 620/1024 [MB] (11 MBps) [2024-11-04T10:23:02.808Z] Copying: 632/1024 [MB] (12 MBps) [2024-11-04T10:23:03.785Z] Copying: 642/1024 [MB] (10 MBps) [2024-11-04T10:23:04.719Z] Copying: 654/1024 [MB] (11 MBps) [2024-11-04T10:23:05.675Z] Copying: 669/1024 [MB] (14 MBps) [2024-11-04T10:23:07.050Z] Copying: 681/1024 [MB] (12 MBps) [2024-11-04T10:23:07.615Z] Copying: 693/1024 [MB] (11 MBps) [2024-11-04T10:23:08.987Z] Copying: 704/1024 [MB] (10 MBps) [2024-11-04T10:23:09.919Z] Copying: 715/1024 [MB] (11 MBps) [2024-11-04T10:23:10.852Z] Copying: 726/1024 [MB] (10 MBps) [2024-11-04T10:23:11.785Z] Copying: 736/1024 [MB] (10 MBps) [2024-11-04T10:23:12.718Z] Copying: 747/1024 [MB] (10 MBps) [2024-11-04T10:23:13.650Z] Copying: 757/1024 [MB] (10 MBps) [2024-11-04T10:23:15.024Z] Copying: 768/1024 [MB] (10 MBps) [2024-11-04T10:23:15.956Z] Copying: 797528/1048576 [kB] (10216 kBps) 
[2024-11-04T10:23:16.890Z] Copying: 789/1024 [MB] (10 MBps) [2024-11-04T10:23:17.849Z] Copying: 800/1024 [MB] (10 MBps) [2024-11-04T10:23:18.780Z] Copying: 812/1024 [MB] (11 MBps) [2024-11-04T10:23:19.713Z] Copying: 822/1024 [MB] (10 MBps) [2024-11-04T10:23:20.646Z] Copying: 833/1024 [MB] (10 MBps) [2024-11-04T10:23:22.019Z] Copying: 843/1024 [MB] (10 MBps) [2024-11-04T10:23:22.949Z] Copying: 855/1024 [MB] (11 MBps) [2024-11-04T10:23:23.879Z] Copying: 867/1024 [MB] (11 MBps) [2024-11-04T10:23:24.831Z] Copying: 877/1024 [MB] (10 MBps) [2024-11-04T10:23:25.765Z] Copying: 908716/1048576 [kB] (10020 kBps) [2024-11-04T10:23:26.698Z] Copying: 898/1024 [MB] (10 MBps) [2024-11-04T10:23:27.670Z] Copying: 908/1024 [MB] (10 MBps) [2024-11-04T10:23:29.040Z] Copying: 919/1024 [MB] (10 MBps) [2024-11-04T10:23:29.969Z] Copying: 929/1024 [MB] (10 MBps) [2024-11-04T10:23:30.898Z] Copying: 939/1024 [MB] (10 MBps) [2024-11-04T10:23:31.828Z] Copying: 950/1024 [MB] (10 MBps) [2024-11-04T10:23:32.761Z] Copying: 961/1024 [MB] (10 MBps) [2024-11-04T10:23:33.689Z] Copying: 971/1024 [MB] (10 MBps) [2024-11-04T10:23:34.650Z] Copying: 981/1024 [MB] (10 MBps) [2024-11-04T10:23:36.021Z] Copying: 993/1024 [MB] (11 MBps) [2024-11-04T10:23:36.668Z] Copying: 1005/1024 [MB] (12 MBps) [2024-11-04T10:23:37.603Z] Copying: 1017/1024 [MB] (11 MBps) [2024-11-04T10:23:37.603Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-11-04 10:23:37.309493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.309564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:31.858 [2024-11-04 10:23:37.309580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:31.858 [2024-11-04 10:23:37.309590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.309624] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:31.858 [2024-11-04 10:23:37.312534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.312567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:31.858 [2024-11-04 10:23:37.312579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.894 ms 00:23:31.858 [2024-11-04 10:23:37.312586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.312819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.312836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:31.858 [2024-11-04 10:23:37.312845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:23:31.858 [2024-11-04 10:23:37.312852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.317493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.317527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:31.858 [2024-11-04 10:23:37.317537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.626 ms 00:23:31.858 [2024-11-04 10:23:37.317545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.323951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.323987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:23:31.858 [2024-11-04 10:23:37.323998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.373 ms 00:23:31.858 [2024-11-04 10:23:37.324007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.349890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.349938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:31.858 [2024-11-04 10:23:37.349951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.814 ms 00:23:31.858 [2024-11-04 10:23:37.349958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:31.858 [2024-11-04 10:23:37.364528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:31.858 [2024-11-04 10:23:37.364577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:31.858 [2024-11-04 10:23:37.364599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.523 ms 00:23:31.858 [2024-11-04 10:23:37.364609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.750196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.117 [2024-11-04 10:23:37.750255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:32.117 [2024-11-04 10:23:37.750270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 385.531 ms 00:23:32.117 [2024-11-04 10:23:37.750279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.776330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.117 [2024-11-04 10:23:37.776384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:32.117 [2024-11-04 10:23:37.776398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.034 ms 00:23:32.117 [2024-11-04 10:23:37.776407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.800910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.117 [2024-11-04 10:23:37.800950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:32.117 [2024-11-04 10:23:37.800973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.453 ms 00:23:32.117 [2024-11-04 10:23:37.800981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.825866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.117 [2024-11-04 10:23:37.825915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:32.117 [2024-11-04 10:23:37.825927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.840 ms 00:23:32.117 [2024-11-04 10:23:37.825936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.850297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:32.117 [2024-11-04 10:23:37.850345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:32.117 [2024-11-04 10:23:37.850358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.261 ms 00:23:32.117 [2024-11-04 10:23:37.850367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:32.117 [2024-11-04 10:23:37.850411] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:32.117 [2024-11-04 
10:23:37.850426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:32.117 [2024-11-04 10:23:37.850436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands 2-74: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0]
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:32.118 [2024-11-04 10:23:37.851197] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:23:32.118 [2024-11-04 10:23:37.851220] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:23:32.118 [2024-11-04 10:23:37.851227] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 29f73b09-5fd3-453f-95e2-8e762e97d9e7
00:23:32.118 [2024-11-04 10:23:37.851235] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
00:23:32.118 [2024-11-04 10:23:37.851244] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 16832
00:23:32.118 [2024-11-04 10:23:37.851251] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 15872
00:23:32.118 [2024-11-04 10:23:37.851259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0605
00:23:32.118 [2024-11-04 10:23:37.851266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:23:32.118 [2024-11-04 10:23:37.851274] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:23:32.118 [2024-11-04 10:23:37.851284] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:23:32.118 [2024-11-04 10:23:37.851297] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:23:32.118 [2024-11-04 10:23:37.851305] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:23:32.118 [2024-11-04 10:23:37.851312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.118 [2024-11-04 10:23:37.851320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:23:32.118 [2024-11-04 10:23:37.851329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.902 ms
00:23:32.118 [2024-11-04 10:23:37.851336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.863622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.376 [2024-11-04 10:23:37.863661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:23:32.376 [2024-11-04 10:23:37.863673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.268 ms
00:23:32.376 [2024-11-04 10:23:37.863681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.864074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:23:32.376 [2024-11-04 10:23:37.864085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:23:32.376 [2024-11-04 10:23:37.864093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms
00:23:32.376 [2024-11-04 10:23:37.864101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.896526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:37.896570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:23:32.376 [2024-11-04 10:23:37.896586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:37.896594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.896659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:37.896672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:23:32.376 [2024-11-04 10:23:37.896680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:37.896687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
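[Editor's note] The statistics block above is internally consistent and worth a quick sanity check. Write amplification (WAF) is total media writes divided by user writes:

  WAF = total writes / user writes = 16832 / 15872 ≈ 1.0605

so this run cost roughly 960 extra metadata writes on top of the user data. The "total valid LBAs: 131072" figure likewise matches the band dump above, where Band 1 is the only band holding data (131072 of 261120 blocks valid, state open).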
00:23:32.376 [2024-11-04 10:23:37.896748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:37.896757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:23:32.376 [2024-11-04 10:23:37.896765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:37.896775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.896801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:37.896809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:23:32.376 [2024-11-04 10:23:37.896818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:37.896825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:37.973721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:37.973764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:23:32.376 [2024-11-04 10:23:37.973775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:37.973806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.376 [2024-11-04 10:23:38.036717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.376 [2024-11-04 10:23:38.036757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:23:32.376 [2024-11-04 10:23:38.036768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.376 [2024-11-04 10:23:38.036776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.036857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.036865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:23:32.377 [2024-11-04 10:23:38.036873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.036881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.036918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.036927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:23:32.377 [2024-11-04 10:23:38.036935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.036942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.037024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.037033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:23:32.377 [2024-11-04 10:23:38.037041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.037048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.037078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.037089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:23:32.377 [2024-11-04 10:23:38.037097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.037104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.037137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.037146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:23:32.377 [2024-11-04 10:23:38.037153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.037161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.037201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:23:32.377 [2024-11-04 10:23:38.037210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:23:32.377 [2024-11-04 10:23:38.037218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:23:32.377 [2024-11-04 10:23:38.037225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:23:32.377 [2024-11-04 10:23:38.037331] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 727.813 ms, result 0
00:23:33.310
00:23:33.310
00:23:33.310 10:23:38 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:35.838 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:23:35.838 Process with pid 74308 is not found
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74308
00:23:35.838 10:23:41 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74308 ']'
00:23:35.838 10:23:41 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74308
00:23:35.838 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74308) - No such process
00:23:35.838 10:23:41 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74308 is not found'
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm
00:23:35.838 Remove shared memory files
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:23:35.838 10:23:41 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f
00:23:35.838
00:23:35.838 real 4m46.075s
00:23:35.838 user 4m36.089s
00:23:35.838 sys 0m10.536s
00:23:35.838 10:23:41 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable
00:23:35.838 ************************************
00:23:35.838 END TEST ftl_restore
00:23:35.838 ************************************
00:23:35.838 10:23:41 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x
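[Editor's note] The teardown traced above uses a probe-before-kill pattern: `kill -0` sends no signal at all, it only reports whether the pid still exists, which is why an already-exited target (pid 74308 here) yields "No such process" and the friendly echo rather than a failed kill. A minimal sketch of that pattern, for illustration only; it is not the actual killprocess() from autotest_common.sh:

  # Probe a pid with signal 0, then terminate it only if it is still alive.
  killprocess_sketch() {
      local pid=$1
      [[ -n $pid ]] || return 1               # no pid recorded, nothing to do
      if kill -0 "$pid" 2>/dev/null; then     # signal 0 = existence check only
          kill "$pid" && wait "$pid" 2>/dev/null   # alive: terminate and reap
      else
          echo "Process with pid $pid is not found" # already gone, as in this run
      fi
  }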
00:23:35.838 10:23:41 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:23:35.838 10:23:41 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']'
00:23:35.838 10:23:41 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:23:35.838 10:23:41 ftl -- common/autotest_common.sh@10 -- # set +x
00:23:35.838 ************************************
00:23:35.838 START TEST ftl_dirty_shutdown
00:23:35.838 ************************************
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0
00:23:35.838 * Looking for test storage...
00:23:35.838 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:35.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.838 --rc genhtml_branch_coverage=1 00:23:35.838 --rc genhtml_function_coverage=1 00:23:35.838 --rc genhtml_legend=1 00:23:35.838 --rc geninfo_all_blocks=1 00:23:35.838 --rc geninfo_unexecuted_blocks=1 00:23:35.838 00:23:35.838 ' 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:35.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.838 --rc genhtml_branch_coverage=1 00:23:35.838 --rc genhtml_function_coverage=1 00:23:35.838 --rc genhtml_legend=1 00:23:35.838 --rc geninfo_all_blocks=1 00:23:35.838 --rc geninfo_unexecuted_blocks=1 00:23:35.838 00:23:35.838 ' 00:23:35.838 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:35.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.838 --rc genhtml_branch_coverage=1 00:23:35.838 --rc genhtml_function_coverage=1 00:23:35.838 --rc genhtml_legend=1 00:23:35.838 --rc geninfo_all_blocks=1 00:23:35.839 --rc geninfo_unexecuted_blocks=1 00:23:35.839 00:23:35.839 ' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:35.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:35.839 --rc genhtml_branch_coverage=1 00:23:35.839 --rc genhtml_function_coverage=1 00:23:35.839 --rc genhtml_legend=1 00:23:35.839 --rc geninfo_all_blocks=1 00:23:35.839 --rc geninfo_unexecuted_blocks=1 00:23:35.839 00:23:35.839 ' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:35.839 10:23:41 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77344 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77344 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77344 ']' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:35.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:35.839 10:23:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:35.839 [2024-11-04 10:23:41.515997] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
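[Editor's note] Before issuing any RPCs, dirty_shutdown.sh starts the target (spdk_tgt -m 0x1, i.e. core 0 only), records its pid in svcpid for the EXIT trap, and blocks in waitforlisten until the UNIX-domain RPC socket /var/tmp/spdk.sock answers. A rough Bash sketch of that launch-and-wait pattern, using the binary paths from this run; this is an illustrative approximation, not the real waitforlisten() from autotest_common.sh, and the retry count and sleep interval are invented:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  "$spdk_tgt" -m 0x1 &              # launch the target in the background
  svcpid=$!                         # keep the pid so the EXIT trap can kill it

  for ((i = 0; i < 100; i++)); do
      kill -0 "$svcpid" 2>/dev/null || exit 1   # give up if the target died early
      # rpc_get_methods over /var/tmp/spdk.sock succeeds once the app is up
      "$rpc" -t 1 rpc_get_methods &>/dev/null && break
      sleep 0.5
  done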
00:23:35.839 [2024-11-04 10:23:41.516121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77344 ] 00:23:36.098 [2024-11-04 10:23:41.668907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.098 [2024-11-04 10:23:41.771849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:36.662 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:36.920 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:37.177 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:37.178 { 00:23:37.178 "name": "nvme0n1", 00:23:37.178 "aliases": [ 00:23:37.178 "33424789-a24b-40f0-a3eb-f05d2ae28f56" 00:23:37.178 ], 00:23:37.178 "product_name": "NVMe disk", 00:23:37.178 "block_size": 4096, 00:23:37.178 "num_blocks": 1310720, 00:23:37.178 "uuid": "33424789-a24b-40f0-a3eb-f05d2ae28f56", 00:23:37.178 "numa_id": -1, 00:23:37.178 "assigned_rate_limits": { 00:23:37.178 "rw_ios_per_sec": 0, 00:23:37.178 "rw_mbytes_per_sec": 0, 00:23:37.178 "r_mbytes_per_sec": 0, 00:23:37.178 "w_mbytes_per_sec": 0 00:23:37.178 }, 00:23:37.178 "claimed": true, 00:23:37.178 "claim_type": "read_many_write_one", 00:23:37.178 "zoned": false, 00:23:37.178 "supported_io_types": { 00:23:37.178 "read": true, 00:23:37.178 "write": true, 00:23:37.178 "unmap": true, 00:23:37.178 "flush": true, 00:23:37.178 "reset": true, 00:23:37.178 "nvme_admin": true, 00:23:37.178 "nvme_io": true, 00:23:37.178 "nvme_io_md": false, 00:23:37.178 "write_zeroes": true, 00:23:37.178 "zcopy": false, 00:23:37.178 "get_zone_info": false, 00:23:37.178 "zone_management": false, 00:23:37.178 "zone_append": false, 00:23:37.178 "compare": true, 00:23:37.178 "compare_and_write": false, 00:23:37.178 "abort": true, 00:23:37.178 "seek_hole": false, 00:23:37.178 "seek_data": false, 00:23:37.178 
"copy": true, 00:23:37.178 "nvme_iov_md": false 00:23:37.178 }, 00:23:37.178 "driver_specific": { 00:23:37.178 "nvme": [ 00:23:37.178 { 00:23:37.178 "pci_address": "0000:00:11.0", 00:23:37.178 "trid": { 00:23:37.178 "trtype": "PCIe", 00:23:37.178 "traddr": "0000:00:11.0" 00:23:37.178 }, 00:23:37.178 "ctrlr_data": { 00:23:37.178 "cntlid": 0, 00:23:37.178 "vendor_id": "0x1b36", 00:23:37.178 "model_number": "QEMU NVMe Ctrl", 00:23:37.178 "serial_number": "12341", 00:23:37.178 "firmware_revision": "8.0.0", 00:23:37.178 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:37.178 "oacs": { 00:23:37.178 "security": 0, 00:23:37.178 "format": 1, 00:23:37.178 "firmware": 0, 00:23:37.178 "ns_manage": 1 00:23:37.178 }, 00:23:37.178 "multi_ctrlr": false, 00:23:37.178 "ana_reporting": false 00:23:37.178 }, 00:23:37.178 "vs": { 00:23:37.178 "nvme_version": "1.4" 00:23:37.178 }, 00:23:37.178 "ns_data": { 00:23:37.178 "id": 1, 00:23:37.178 "can_share": false 00:23:37.178 } 00:23:37.178 } 00:23:37.178 ], 00:23:37.178 "mp_policy": "active_passive" 00:23:37.178 } 00:23:37.178 } 00:23:37.178 ]' 00:23:37.178 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:37.178 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:37.178 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:37.436 10:23:42 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:37.436 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=81295366-4806-4a54-b482-b599e89088d9 00:23:37.436 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:37.436 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 81295366-4806-4a54-b482-b599e89088d9 00:23:37.694 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:37.987 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=fb0e384f-1fc4-49c0-b085-29ef655510cb 00:23:37.987 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u fb0e384f-1fc4-49c0-b085-29ef655510cb 00:23:38.243 10:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:38.244 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.501 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:38.501 { 00:23:38.501 "name": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:38.501 "aliases": [ 00:23:38.501 "lvs/nvme0n1p0" 00:23:38.501 ], 00:23:38.501 "product_name": "Logical Volume", 00:23:38.501 "block_size": 4096, 00:23:38.501 "num_blocks": 26476544, 00:23:38.501 "uuid": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:38.501 "assigned_rate_limits": { 00:23:38.501 "rw_ios_per_sec": 0, 00:23:38.501 "rw_mbytes_per_sec": 0, 00:23:38.501 "r_mbytes_per_sec": 0, 00:23:38.501 "w_mbytes_per_sec": 0 00:23:38.501 }, 00:23:38.501 "claimed": false, 00:23:38.501 "zoned": false, 00:23:38.501 "supported_io_types": { 00:23:38.501 "read": true, 00:23:38.501 "write": true, 00:23:38.501 "unmap": true, 00:23:38.501 "flush": false, 00:23:38.501 "reset": true, 00:23:38.501 "nvme_admin": false, 00:23:38.501 "nvme_io": false, 00:23:38.501 "nvme_io_md": false, 00:23:38.501 "write_zeroes": true, 00:23:38.501 "zcopy": false, 00:23:38.501 "get_zone_info": false, 00:23:38.501 "zone_management": false, 00:23:38.501 "zone_append": false, 00:23:38.501 "compare": false, 00:23:38.501 "compare_and_write": false, 00:23:38.501 "abort": false, 00:23:38.501 "seek_hole": true, 00:23:38.501 "seek_data": true, 00:23:38.501 "copy": false, 00:23:38.501 "nvme_iov_md": false 00:23:38.501 }, 00:23:38.501 "driver_specific": { 00:23:38.501 "lvol": { 00:23:38.501 "lvol_store_uuid": "fb0e384f-1fc4-49c0-b085-29ef655510cb", 00:23:38.501 "base_bdev": "nvme0n1", 00:23:38.501 "thin_provision": true, 00:23:38.501 "num_allocated_clusters": 0, 00:23:38.501 "snapshot": false, 00:23:38.501 "clone": false, 00:23:38.501 "esnap_clone": false 00:23:38.501 } 00:23:38.501 } 00:23:38.501 } 00:23:38.501 ]' 00:23:38.501 10:23:43 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:38.501 10:23:44 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:38.759 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:39.017 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:39.017 { 00:23:39.017 "name": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:39.017 "aliases": [ 00:23:39.017 "lvs/nvme0n1p0" 00:23:39.017 ], 00:23:39.017 "product_name": "Logical Volume", 00:23:39.017 "block_size": 4096, 00:23:39.017 "num_blocks": 26476544, 00:23:39.017 "uuid": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:39.017 "assigned_rate_limits": { 00:23:39.017 "rw_ios_per_sec": 0, 00:23:39.017 "rw_mbytes_per_sec": 0, 00:23:39.017 "r_mbytes_per_sec": 0, 00:23:39.017 "w_mbytes_per_sec": 0 00:23:39.017 }, 00:23:39.017 "claimed": false, 00:23:39.017 "zoned": false, 00:23:39.017 "supported_io_types": { 00:23:39.018 "read": true, 00:23:39.018 "write": true, 00:23:39.018 "unmap": true, 00:23:39.018 "flush": false, 00:23:39.018 "reset": true, 00:23:39.018 "nvme_admin": false, 00:23:39.018 "nvme_io": false, 00:23:39.018 "nvme_io_md": false, 00:23:39.018 "write_zeroes": true, 00:23:39.018 "zcopy": false, 00:23:39.018 "get_zone_info": false, 00:23:39.018 "zone_management": false, 00:23:39.018 "zone_append": false, 00:23:39.018 "compare": false, 00:23:39.018 "compare_and_write": false, 00:23:39.018 "abort": false, 00:23:39.018 "seek_hole": true, 00:23:39.018 "seek_data": true, 00:23:39.018 "copy": false, 00:23:39.018 "nvme_iov_md": false 00:23:39.018 }, 00:23:39.018 "driver_specific": { 00:23:39.018 "lvol": { 00:23:39.018 "lvol_store_uuid": "fb0e384f-1fc4-49c0-b085-29ef655510cb", 00:23:39.018 "base_bdev": "nvme0n1", 00:23:39.018 "thin_provision": true, 00:23:39.018 "num_allocated_clusters": 0, 00:23:39.018 "snapshot": false, 00:23:39.018 "clone": false, 00:23:39.018 "esnap_clone": false 00:23:39.018 } 00:23:39.018 } 00:23:39.018 } 00:23:39.018 ]' 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:39.018 10:23:44 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:39.275 10:23:44 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ab4c1751-fee0-4520-84f1-ff8ea0756006 00:23:39.275 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:39.275 { 00:23:39.275 "name": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:39.275 "aliases": [ 00:23:39.275 "lvs/nvme0n1p0" 00:23:39.275 ], 00:23:39.275 "product_name": "Logical Volume", 00:23:39.275 "block_size": 4096, 00:23:39.275 "num_blocks": 26476544, 00:23:39.275 "uuid": "ab4c1751-fee0-4520-84f1-ff8ea0756006", 00:23:39.275 "assigned_rate_limits": { 00:23:39.275 "rw_ios_per_sec": 0, 00:23:39.275 "rw_mbytes_per_sec": 0, 00:23:39.275 "r_mbytes_per_sec": 0, 00:23:39.275 "w_mbytes_per_sec": 0 00:23:39.275 }, 00:23:39.275 "claimed": false, 00:23:39.275 "zoned": false, 00:23:39.275 "supported_io_types": { 00:23:39.275 "read": true, 00:23:39.275 "write": true, 00:23:39.275 "unmap": true, 00:23:39.275 "flush": false, 00:23:39.275 "reset": true, 00:23:39.275 "nvme_admin": false, 00:23:39.275 "nvme_io": false, 00:23:39.275 "nvme_io_md": false, 00:23:39.275 "write_zeroes": true, 00:23:39.275 "zcopy": false, 00:23:39.275 "get_zone_info": false, 00:23:39.275 "zone_management": false, 00:23:39.275 "zone_append": false, 00:23:39.275 "compare": false, 00:23:39.275 "compare_and_write": false, 00:23:39.275 "abort": false, 00:23:39.275 "seek_hole": true, 00:23:39.275 "seek_data": true, 00:23:39.275 "copy": false, 00:23:39.275 "nvme_iov_md": false 00:23:39.276 }, 00:23:39.276 "driver_specific": { 00:23:39.276 "lvol": { 00:23:39.276 "lvol_store_uuid": "fb0e384f-1fc4-49c0-b085-29ef655510cb", 00:23:39.276 "base_bdev": "nvme0n1", 00:23:39.276 "thin_provision": true, 00:23:39.276 "num_allocated_clusters": 0, 00:23:39.276 "snapshot": false, 00:23:39.276 "clone": false, 00:23:39.276 "esnap_clone": false 00:23:39.276 } 00:23:39.276 } 00:23:39.276 } 00:23:39.276 ]' 00:23:39.276 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d ab4c1751-fee0-4520-84f1-ff8ea0756006 
--l2p_dram_limit 10' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:39.534 10:23:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ab4c1751-fee0-4520-84f1-ff8ea0756006 --l2p_dram_limit 10 -c nvc0n1p0 00:23:39.534 [2024-11-04 10:23:45.271498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.534 [2024-11-04 10:23:45.271553] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:39.534 [2024-11-04 10:23:45.271570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:39.534 [2024-11-04 10:23:45.271580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.534 [2024-11-04 10:23:45.271635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.534 [2024-11-04 10:23:45.271645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:39.534 [2024-11-04 10:23:45.271655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:39.534 [2024-11-04 10:23:45.271662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.534 [2024-11-04 10:23:45.271687] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:39.534 [2024-11-04 10:23:45.272399] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:39.534 [2024-11-04 10:23:45.272428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.534 [2024-11-04 10:23:45.272435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:39.534 [2024-11-04 10:23:45.272446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:23:39.534 [2024-11-04 10:23:45.272454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.534 [2024-11-04 10:23:45.272542] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 96f32096-1409-4fdb-af0d-af6bfac0fff4 00:23:39.534 [2024-11-04 10:23:45.273633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.534 [2024-11-04 10:23:45.273662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:39.534 [2024-11-04 10:23:45.273672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:39.534 [2024-11-04 10:23:45.273684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.279014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.279047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:39.792 [2024-11-04 10:23:45.279056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.289 ms 00:23:39.792 [2024-11-04 10:23:45.279067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.279150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.279161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:39.792 [2024-11-04 10:23:45.279169] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:23:39.792 [2024-11-04 10:23:45.279182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.279228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.279240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:39.792 [2024-11-04 10:23:45.279248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:39.792 [2024-11-04 10:23:45.279257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.279280] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:39.792 [2024-11-04 10:23:45.282874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.282905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:39.792 [2024-11-04 10:23:45.282917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.598 ms 00:23:39.792 [2024-11-04 10:23:45.282928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.282962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.282970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:39.792 [2024-11-04 10:23:45.282979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:39.792 [2024-11-04 10:23:45.282986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.283012] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:39.792 [2024-11-04 10:23:45.283149] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:39.792 [2024-11-04 10:23:45.283165] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:39.792 [2024-11-04 10:23:45.283175] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:39.792 [2024-11-04 10:23:45.283186] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:39.792 [2024-11-04 10:23:45.283195] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:39.792 [2024-11-04 10:23:45.283204] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:39.792 [2024-11-04 10:23:45.283211] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:39.792 [2024-11-04 10:23:45.283220] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:39.792 [2024-11-04 10:23:45.283227] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:39.792 [2024-11-04 10:23:45.283238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.283245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:39.792 [2024-11-04 10:23:45.283255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.228 ms 00:23:39.792 [2024-11-04 10:23:45.283268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.283353] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.792 [2024-11-04 10:23:45.283361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:39.792 [2024-11-04 10:23:45.283370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:23:39.792 [2024-11-04 10:23:45.283377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.792 [2024-11-04 10:23:45.283489] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:39.792 [2024-11-04 10:23:45.283501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:39.792 [2024-11-04 10:23:45.283511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.792 [2024-11-04 10:23:45.283518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.792 [2024-11-04 10:23:45.283527] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:39.792 [2024-11-04 10:23:45.283533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:39.792 [2024-11-04 10:23:45.283541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:39.792 [2024-11-04 10:23:45.283548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:39.792 [2024-11-04 10:23:45.283557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:39.792 [2024-11-04 10:23:45.283563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.792 [2024-11-04 10:23:45.283571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:39.792 [2024-11-04 10:23:45.283577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:39.792 [2024-11-04 10:23:45.283585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:39.792 [2024-11-04 10:23:45.283592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:39.792 [2024-11-04 10:23:45.283600] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:39.792 [2024-11-04 10:23:45.283606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.792 [2024-11-04 10:23:45.283616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:39.792 [2024-11-04 10:23:45.283622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:39.793 [2024-11-04 10:23:45.283649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:39.793 [2024-11-04 10:23:45.283670] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283678] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:39.793 [2024-11-04 10:23:45.283692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283706] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:39.793 [2024-11-04 10:23:45.283712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:39.793 [2024-11-04 10:23:45.283736] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283743] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.793 [2024-11-04 10:23:45.283752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:39.793 [2024-11-04 10:23:45.283758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:39.793 [2024-11-04 10:23:45.283766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:39.793 [2024-11-04 10:23:45.283772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:39.793 [2024-11-04 10:23:45.283791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:39.793 [2024-11-04 10:23:45.283798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:39.793 [2024-11-04 10:23:45.283812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:39.793 [2024-11-04 10:23:45.283820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283826] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:39.793 [2024-11-04 10:23:45.283838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:39.793 [2024-11-04 10:23:45.283845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:39.793 [2024-11-04 10:23:45.283860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:39.793 [2024-11-04 10:23:45.283872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:39.793 [2024-11-04 10:23:45.283878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:39.793 [2024-11-04 10:23:45.283887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:39.793 [2024-11-04 10:23:45.283893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:39.793 [2024-11-04 10:23:45.283902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:39.793 [2024-11-04 10:23:45.283912] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:39.793 [2024-11-04 10:23:45.283923] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.283931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:39.793 [2024-11-04 10:23:45.283940] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:39.793 [2024-11-04 10:23:45.283947] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:39.793 [2024-11-04 10:23:45.283955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:39.793 [2024-11-04 10:23:45.283962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:39.793 [2024-11-04 10:23:45.283971] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:39.793 [2024-11-04 10:23:45.283977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:39.793 [2024-11-04 10:23:45.283986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:39.793 [2024-11-04 10:23:45.283993] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:39.793 [2024-11-04 10:23:45.284003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284025] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:39.793 [2024-11-04 10:23:45.284041] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:39.793 [2024-11-04 10:23:45.284050] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284061] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:39.793 [2024-11-04 10:23:45.284070] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:39.793 [2024-11-04 10:23:45.284077] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:39.793 [2024-11-04 10:23:45.284086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:39.793 [2024-11-04 10:23:45.284093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:39.793 [2024-11-04 10:23:45.284103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:39.793 [2024-11-04 10:23:45.284110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:23:39.793 [2024-11-04 10:23:45.284118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:39.793 [2024-11-04 10:23:45.284154] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:39.793 [2024-11-04 10:23:45.284168] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:43.071 [2024-11-04 10:23:48.309444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.309512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:43.071 [2024-11-04 10:23:48.309527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3025.277 ms 00:23:43.071 [2024-11-04 10:23:48.309537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.071 [2024-11-04 10:23:48.335508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.335566] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.071 [2024-11-04 10:23:48.335579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.761 ms 00:23:43.071 [2024-11-04 10:23:48.335588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.071 [2024-11-04 10:23:48.335732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.335744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:43.071 [2024-11-04 10:23:48.335754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:43.071 [2024-11-04 10:23:48.335765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.071 [2024-11-04 10:23:48.366585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.366639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.071 [2024-11-04 10:23:48.366651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.750 ms 00:23:43.071 [2024-11-04 10:23:48.366661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.071 [2024-11-04 10:23:48.366703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.366714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.071 [2024-11-04 10:23:48.366723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:43.071 [2024-11-04 10:23:48.366734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.071 [2024-11-04 10:23:48.367136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.071 [2024-11-04 10:23:48.367196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.071 [2024-11-04 10:23:48.367206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.338 ms 00:23:43.072 [2024-11-04 10:23:48.367216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.367336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.367352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.072 [2024-11-04 10:23:48.367360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:23:43.072 [2024-11-04 10:23:48.367372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.381883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.381933] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.072 [2024-11-04 10:23:48.381945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.490 ms 00:23:43.072 [2024-11-04 10:23:48.381957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.393647] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:43.072 [2024-11-04 10:23:48.396591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.396630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:43.072 [2024-11-04 10:23:48.396644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.537 ms 00:23:43.072 [2024-11-04 10:23:48.396652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.491351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.491423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:43.072 [2024-11-04 10:23:48.491441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 94.654 ms 00:23:43.072 [2024-11-04 10:23:48.491450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.491630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.491641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:43.072 [2024-11-04 10:23:48.491654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:23:43.072 [2024-11-04 10:23:48.491665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.515503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.515559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:43.072 [2024-11-04 10:23:48.515575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.776 ms 00:23:43.072 [2024-11-04 10:23:48.515584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.538555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.538610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:43.072 [2024-11-04 10:23:48.538625] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.934 ms 00:23:43.072 [2024-11-04 10:23:48.538633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.539211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.539232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:43.072 [2024-11-04 10:23:48.539243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.547 ms 00:23:43.072 [2024-11-04 10:23:48.539250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.611894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.611948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:43.072 [2024-11-04 10:23:48.611967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.569 ms 00:23:43.072 [2024-11-04 10:23:48.611975] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.637611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.637668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:43.072 [2024-11-04 10:23:48.637684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.543 ms 00:23:43.072 [2024-11-04 10:23:48.637693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.662810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.662865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:43.072 [2024-11-04 10:23:48.662879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.064 ms 00:23:43.072 [2024-11-04 10:23:48.662887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.687856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.687923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:43.072 [2024-11-04 10:23:48.687937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.918 ms 00:23:43.072 [2024-11-04 10:23:48.687945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.687993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.688002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:43.072 [2024-11-04 10:23:48.688016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:43.072 [2024-11-04 10:23:48.688023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.688108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.072 [2024-11-04 10:23:48.688118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:43.072 [2024-11-04 10:23:48.688128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:23:43.072 [2024-11-04 10:23:48.688136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.072 [2024-11-04 10:23:48.689151] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3417.213 ms, result 0 00:23:43.072 { 00:23:43.072 "name": "ftl0", 00:23:43.072 "uuid": "96f32096-1409-4fdb-af0d-af6bfac0fff4" 00:23:43.072 } 00:23:43.072 10:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:43.072 10:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:43.329 10:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:43.329 10:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:43.329 10:23:48 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:43.588 /dev/nbd0 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:43.588 1+0 records in 00:23:43.588 1+0 records out 00:23:43.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000605098 s, 6.8 MB/s 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:43.588 10:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:43.588 [2024-11-04 10:23:49.229547] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:23:43.588 [2024-11-04 10:23:49.229678] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77476 ] 00:23:43.846 [2024-11-04 10:23:49.381274] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.846 [2024-11-04 10:23:49.483715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.219  [2024-11-04T10:23:51.897Z] Copying: 191/1024 [MB] (191 MBps) [2024-11-04T10:23:52.828Z] Copying: 387/1024 [MB] (196 MBps) [2024-11-04T10:23:53.761Z] Copying: 583/1024 [MB] (196 MBps) [2024-11-04T10:23:55.131Z] Copying: 778/1024 [MB] (195 MBps) [2024-11-04T10:23:55.131Z] Copying: 972/1024 [MB] (193 MBps) [2024-11-04T10:23:55.696Z] Copying: 1024/1024 [MB] (average 194 MBps) 00:23:49.951 00:23:49.951 10:23:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:52.475 10:23:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:52.475 [2024-11-04 10:23:57.822469] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:23:52.475 [2024-11-04 10:23:57.822595] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77568 ] 00:23:52.475 [2024-11-04 10:23:57.982978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:52.475 [2024-11-04 10:23:58.085017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.848  [2024-11-04T10:24:00.551Z] Copying: 24/1024 [MB] (24 MBps) [2024-11-04T10:24:01.482Z] Copying: 25481216/1073741824 [B] (0 Bps) [2024-11-04T10:24:02.415Z] Copying: 41/1024 [MB] (16 MBps) [2024-11-04T10:24:03.349Z] Copying: 69/1024 [MB] (27 MBps) [2024-11-04T10:24:04.332Z] Copying: 99/1024 [MB] (29 MBps) [2024-11-04T10:24:05.705Z] Copying: 128/1024 [MB] (29 MBps) [2024-11-04T10:24:06.639Z] Copying: 158/1024 [MB] (29 MBps) [2024-11-04T10:24:07.574Z] Copying: 188/1024 [MB] (29 MBps) [2024-11-04T10:24:08.506Z] Copying: 214/1024 [MB] (26 MBps) [2024-11-04T10:24:09.467Z] Copying: 243/1024 [MB] (28 MBps) [2024-11-04T10:24:10.400Z] Copying: 270/1024 [MB] (26 MBps) [2024-11-04T10:24:11.332Z] Copying: 299/1024 [MB] (29 MBps) [2024-11-04T10:24:12.717Z] Copying: 330/1024 [MB] (30 MBps) [2024-11-04T10:24:13.681Z] Copying: 356/1024 [MB] (26 MBps) [2024-11-04T10:24:14.644Z] Copying: 386/1024 [MB] (29 MBps) [2024-11-04T10:24:15.577Z] Copying: 416/1024 [MB] (29 MBps) [2024-11-04T10:24:16.548Z] Copying: 442/1024 [MB] (26 MBps) [2024-11-04T10:24:17.481Z] Copying: 472/1024 [MB] (29 MBps) [2024-11-04T10:24:18.415Z] Copying: 501/1024 [MB] (29 MBps) [2024-11-04T10:24:19.348Z] Copying: 531/1024 [MB] (29 MBps) [2024-11-04T10:24:20.718Z] Copying: 561/1024 [MB] (29 MBps) [2024-11-04T10:24:21.681Z] Copying: 590/1024 [MB] (28 MBps) [2024-11-04T10:24:22.614Z] Copying: 619/1024 [MB] (29 MBps) [2024-11-04T10:24:23.566Z] Copying: 648/1024 [MB] (29 MBps) [2024-11-04T10:24:24.498Z] Copying: 677/1024 [MB] (28 MBps) [2024-11-04T10:24:25.494Z] Copying: 706/1024 [MB] (29 MBps) [2024-11-04T10:24:26.427Z] Copying: 734/1024 [MB] (28 MBps) [2024-11-04T10:24:27.358Z] Copying: 763/1024 [MB] (28 MBps) [2024-11-04T10:24:28.729Z] Copying: 793/1024 [MB] (29 MBps) [2024-11-04T10:24:29.661Z] Copying: 811/1024 [MB] (17 MBps) [2024-11-04T10:24:30.593Z] Copying: 838/1024 [MB] (27 MBps) [2024-11-04T10:24:31.527Z] Copying: 868/1024 [MB] (30 MBps) [2024-11-04T10:24:32.471Z] Copying: 897/1024 [MB] (29 MBps) [2024-11-04T10:24:33.404Z] Copying: 926/1024 [MB] (29 MBps) [2024-11-04T10:24:34.363Z] Copying: 956/1024 [MB] (29 MBps) [2024-11-04T10:24:35.309Z] Copying: 985/1024 [MB] (29 MBps) [2024-11-04T10:24:35.874Z] Copying: 1015/1024 [MB] (29 MBps) [2024-11-04T10:24:36.477Z] Copying: 1024/1024 [MB] (average 27 MBps) 00:24:30.732 00:24:30.732 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:30.732 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:30.732 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:30.991 [2024-11-04 10:24:36.599901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.599953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:30.991 [2024-11-04 10:24:36.599965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.003 ms 00:24:30.991 [2024-11-04 10:24:36.599973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.599994] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:30.991 [2024-11-04 10:24:36.602129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.602160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:30.991 [2024-11-04 10:24:36.602171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.119 ms 00:24:30.991 [2024-11-04 10:24:36.602177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.603937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.603964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:30.991 [2024-11-04 10:24:36.603973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.735 ms 00:24:30.991 [2024-11-04 10:24:36.603979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.615862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.615894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:30.991 [2024-11-04 10:24:36.615908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.865 ms 00:24:30.991 [2024-11-04 10:24:36.615916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.620799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.620827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:30.991 [2024-11-04 10:24:36.620837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.853 ms 00:24:30.991 [2024-11-04 10:24:36.620844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.639375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.639413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:30.991 [2024-11-04 10:24:36.639425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.464 ms 00:24:30.991 [2024-11-04 10:24:36.639432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.651994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.652034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:30.991 [2024-11-04 10:24:36.652046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.520 ms 00:24:30.991 [2024-11-04 10:24:36.652053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.991 [2024-11-04 10:24:36.652175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.991 [2024-11-04 10:24:36.652184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:30.991 [2024-11-04 10:24:36.652192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:24:30.992 [2024-11-04 10:24:36.652198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.992 [2024-11-04 10:24:36.671511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.992 
[2024-11-04 10:24:36.671559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:30.992 [2024-11-04 10:24:36.671571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.295 ms 00:24:30.992 [2024-11-04 10:24:36.671578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.992 [2024-11-04 10:24:36.689362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.992 [2024-11-04 10:24:36.689401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:30.992 [2024-11-04 10:24:36.689413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms 00:24:30.992 [2024-11-04 10:24:36.689419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.992 [2024-11-04 10:24:36.707016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.992 [2024-11-04 10:24:36.707059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:30.992 [2024-11-04 10:24:36.707071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.552 ms 00:24:30.992 [2024-11-04 10:24:36.707077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.992 [2024-11-04 10:24:36.724773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.992 [2024-11-04 10:24:36.724829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:30.992 [2024-11-04 10:24:36.724840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.610 ms 00:24:30.992 [2024-11-04 10:24:36.724847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:30.992 [2024-11-04 10:24:36.724886] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:30.992 [2024-11-04 10:24:36.724899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 
10:24:36.724986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.724999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 
00:24:30.992 [2024-11-04 10:24:36.725162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 
wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:30.992 [2024-11-04 10:24:36.725430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:30.993 [2024-11-04 10:24:36.725601] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:30.993 [2024-11-04 10:24:36.725609] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96f32096-1409-4fdb-af0d-af6bfac0fff4 00:24:30.993 [2024-11-04 10:24:36.725615] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:30.993 [2024-11-04 10:24:36.725623] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:30.993 [2024-11-04 10:24:36.725628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:30.993 [2024-11-04 10:24:36.725646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:30.993 [2024-11-04 10:24:36.725652] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:30.993 [2024-11-04 10:24:36.725661] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:30.993 [2024-11-04 10:24:36.725667] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:30.993 [2024-11-04 10:24:36.725674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:30.993 [2024-11-04 10:24:36.725679] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:30.993 [2024-11-04 10:24:36.725686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:30.993 [2024-11-04 10:24:36.725692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:30.993 [2024-11-04 10:24:36.725700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:24:30.993 [2024-11-04 10:24:36.725706] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.735504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.251 [2024-11-04 10:24:36.735545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:31.251 [2024-11-04 10:24:36.735555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.765 ms 00:24:31.251 [2024-11-04 10:24:36.735563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.735859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:31.251 [2024-11-04 10:24:36.735872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:31.251 [2024-11-04 10:24:36.735880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.269 ms 00:24:31.251 [2024-11-04 10:24:36.735886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.768455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.768500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:31.251 [2024-11-04 10:24:36.768513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.768519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.768580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.768587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:31.251 [2024-11-04 10:24:36.768594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.768600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.768702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.768710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:31.251 [2024-11-04 10:24:36.768718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.768723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.768742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.768748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:31.251 [2024-11-04 10:24:36.768755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.768760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.829090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.829141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:31.251 [2024-11-04 10:24:36.829152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.829162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.878836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.878887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:31.251 [2024-11-04 10:24:36.878899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.878905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.878974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.878981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:31.251 [2024-11-04 10:24:36.878990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.878996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.879077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:31.251 [2024-11-04 10:24:36.879085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.879090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.879170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:31.251 [2024-11-04 10:24:36.879178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.879183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.879218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:31.251 [2024-11-04 10:24:36.879225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.879231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.879267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:31.251 [2024-11-04 10:24:36.879274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.879280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:31.251 [2024-11-04 10:24:36.879339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:31.251 [2024-11-04 10:24:36.879347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:31.251 [2024-11-04 10:24:36.879353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:31.251 [2024-11-04 10:24:36.879454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.528 ms, result 0 00:24:31.251 true 00:24:31.251 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77344 00:24:31.251 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77344 00:24:31.251 10:24:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:31.251 [2024-11-04 10:24:36.974513] Starting SPDK v25.01-pre git sha1 
3f50defde / DPDK 24.03.0 initialization... 00:24:31.252 [2024-11-04 10:24:36.974643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77983 ] 00:24:31.509 [2024-11-04 10:24:37.130446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.509 [2024-11-04 10:24:37.213228] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.880  [2024-11-04T10:24:39.558Z] Copying: 251/1024 [MB] (251 MBps) [2024-11-04T10:24:40.497Z] Copying: 512/1024 [MB] (260 MBps) [2024-11-04T10:24:41.429Z] Copying: 769/1024 [MB] (257 MBps) [2024-11-04T10:24:41.994Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:24:36.249 00:24:36.249 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77344 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:36.249 10:24:41 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:36.507 [2024-11-04 10:24:42.020138] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:24:36.507 [2024-11-04 10:24:42.020266] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78037 ] 00:24:36.507 [2024-11-04 10:24:42.180284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.765 [2024-11-04 10:24:42.282948] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:37.022 [2024-11-04 10:24:42.538932] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:37.022 [2024-11-04 10:24:42.538999] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:37.022 [2024-11-04 10:24:42.604622] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:37.022 [2024-11-04 10:24:42.604974] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:37.022 [2024-11-04 10:24:42.605200] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:37.281 [2024-11-04 10:24:42.790579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.790635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:37.281 [2024-11-04 10:24:42.790649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:37.281 [2024-11-04 10:24:42.790657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.790709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.790720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:37.281 [2024-11-04 10:24:42.790728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:24:37.281 [2024-11-04 10:24:42.790735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.790754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:37.281 [2024-11-04 
10:24:42.791476] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:37.281 [2024-11-04 10:24:42.791498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.791506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:37.281 [2024-11-04 10:24:42.791515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms 00:24:37.281 [2024-11-04 10:24:42.791521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.792774] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:37.281 [2024-11-04 10:24:42.804998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.805039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:37.281 [2024-11-04 10:24:42.805051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.226 ms 00:24:37.281 [2024-11-04 10:24:42.805058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.805112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.805122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:37.281 [2024-11-04 10:24:42.805130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:24:37.281 [2024-11-04 10:24:42.805137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.810619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.810652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:37.281 [2024-11-04 10:24:42.810662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.436 ms 00:24:37.281 [2024-11-04 10:24:42.810670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.810737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.810746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:37.281 [2024-11-04 10:24:42.810754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:37.281 [2024-11-04 10:24:42.810761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.810825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.810839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:37.281 [2024-11-04 10:24:42.810847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:37.281 [2024-11-04 10:24:42.810855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.810876] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:37.281 [2024-11-04 10:24:42.814127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.814163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:37.281 [2024-11-04 10:24:42.814172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.256 ms 00:24:37.281 [2024-11-04 10:24:42.814180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
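(Editor's note: every management step in the startup trace above is reported as the same four NOTICE lines — an "Action" or "Rollback" marker, the step "name", its "duration", and a "status" code — emitted by trace_step in mngt/ftl_mngt.c, lines 427-431. The sketch below is a minimal illustrative reimplementation of that logging pattern, not SPDK's actual code; the struct fields and function signatures are hypothetical.)

```c
#include <stdio.h>

/* Illustrative step descriptor. SPDK's real pipeline in mngt/ftl_mngt.c
 * carries callbacks, timeouts, and per-step context as well. */
struct mngt_step {
    const char *name;   /* e.g. "Load super block"                        */
    double duration_ms; /* wall-clock time spent in the step              */
    int status;         /* 0 on success, non-zero on failure              */
    int rollback;       /* 1 when replaying undo steps during shutdown    */
};

/* Emits the same four-line quartet per step that appears in the log. */
static void trace_step(const char *dev, const struct mngt_step *s)
{
    printf("[FTL][%s] %s\n", dev, s->rollback ? "Rollback" : "Action");
    printf("[FTL][%s]  name: %s\n", dev, s->name);
    printf("[FTL][%s]  duration: %.3f ms\n", dev, s->duration_ms);
    printf("[FTL][%s]  status: %d\n", dev, s->status);
}

int main(void)
{
    /* Values taken from the "Load super block" step traced above. */
    struct mngt_step step = { "Load super block", 12.226, 0, 0 };
    trace_step("ftl0", &step);
    return 0;
}
```

(Note that the Rollback quartets seen during the 'FTL shutdown' passes above all report duration 0.000 ms: they appear to be the registered undo steps of the startup pipeline being replayed in reverse, rather than timed work.)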
00:24:37.281 [2024-11-04 10:24:42.814208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.814215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:37.281 [2024-11-04 10:24:42.814224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:37.281 [2024-11-04 10:24:42.814231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.814250] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:37.281 [2024-11-04 10:24:42.814271] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:37.281 [2024-11-04 10:24:42.814305] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:37.281 [2024-11-04 10:24:42.814319] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:37.281 [2024-11-04 10:24:42.814420] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:37.281 [2024-11-04 10:24:42.814430] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:37.281 [2024-11-04 10:24:42.814440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:37.281 [2024-11-04 10:24:42.814451] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:37.281 [2024-11-04 10:24:42.814462] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:37.281 [2024-11-04 10:24:42.814471] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:37.281 [2024-11-04 10:24:42.814479] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:37.281 [2024-11-04 10:24:42.814486] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:37.281 [2024-11-04 10:24:42.814492] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:37.281 [2024-11-04 10:24:42.814500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.814507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:37.281 [2024-11-04 10:24:42.814514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:24:37.281 [2024-11-04 10:24:42.814521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.814603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.281 [2024-11-04 10:24:42.814612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:37.281 [2024-11-04 10:24:42.814621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:37.281 [2024-11-04 10:24:42.814628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.281 [2024-11-04 10:24:42.814728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:37.281 [2024-11-04 10:24:42.814739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:37.281 [2024-11-04 10:24:42.814747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.281 [2024-11-04 10:24:42.814755] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.281 [2024-11-04 10:24:42.814762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:37.281 [2024-11-04 10:24:42.814769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:37.282 [2024-11-04 10:24:42.814802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.282 [2024-11-04 10:24:42.814816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:37.282 [2024-11-04 10:24:42.814830] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:37.282 [2024-11-04 10:24:42.814836] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:37.282 [2024-11-04 10:24:42.814843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:37.282 [2024-11-04 10:24:42.814850] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:37.282 [2024-11-04 10:24:42.814856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:37.282 [2024-11-04 10:24:42.814870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:37.282 [2024-11-04 10:24:42.814889] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:37.282 [2024-11-04 10:24:42.814908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:37.282 [2024-11-04 10:24:42.814927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814933] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:37.282 [2024-11-04 10:24:42.814946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:37.282 [2024-11-04 10:24:42.814958] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:37.282 [2024-11-04 10:24:42.814964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:37.282 [2024-11-04 10:24:42.814971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:37.282 [2024-11-04 10:24:42.814977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:37.282 [2024-11-04 
10:24:42.814983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:37.282 [2024-11-04 10:24:42.814990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:37.282 [2024-11-04 10:24:42.814996] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:37.282 [2024-11-04 10:24:42.815002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:37.282 [2024-11-04 10:24:42.815008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.815015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:37.282 [2024-11-04 10:24:42.815021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:37.282 [2024-11-04 10:24:42.815028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.815035] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:37.282 [2024-11-04 10:24:42.815043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:37.282 [2024-11-04 10:24:42.815049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:37.282 [2024-11-04 10:24:42.815059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:37.282 [2024-11-04 10:24:42.815066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:37.282 [2024-11-04 10:24:42.815072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:37.282 [2024-11-04 10:24:42.815079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:37.282 [2024-11-04 10:24:42.815085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:37.282 [2024-11-04 10:24:42.815092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:37.282 [2024-11-04 10:24:42.815098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:37.282 [2024-11-04 10:24:42.815106] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:37.282 [2024-11-04 10:24:42.815115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:37.282 [2024-11-04 10:24:42.815130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:37.282 [2024-11-04 10:24:42.815137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:37.282 [2024-11-04 10:24:42.815144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:37.282 [2024-11-04 10:24:42.815150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:37.282 [2024-11-04 10:24:42.815157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:37.282 [2024-11-04 10:24:42.815164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 
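(Editor's note: the superblock metadata table being dumped here lists regions in raw FTL blocks — hex blk_offs/blk_sz — while the preceding ftl_layout.c dump reports the same regions in MiB. The two are consistent with a 4 KiB block size, which is an inference from the arithmetic, not something the log states: the L2P region, type 0x2 at blk_offs:0x20 blk_sz:0x5000, works out to a 0.12 MiB offset and 80.00 MiB size, exactly as printed for "Region l2p". A small sketch of the conversion:)

```c
#include <stdio.h>

/* Assumed 4 KiB FTL block size, inferred from the dump's arithmetic:
 * 0x20 blocks -> 0.12 MiB, 0x5000 blocks -> 80.00 MiB. */
#define FTL_BLOCK_SIZE 4096
#define MiB (1024.0 * 1024.0)

/* Reproduces the MiB figures of the ftl_layout.c dump_region output
 * from the raw block values in the superblock metadata table. */
static void dump_region(const char *name, unsigned blk_offs, unsigned blk_sz)
{
    printf("Region %s\n", name);
    printf("  offset: %.2f MiB\n", blk_offs * (double)FTL_BLOCK_SIZE / MiB);
    printf("  blocks: %.2f MiB\n", blk_sz  * (double)FTL_BLOCK_SIZE / MiB);
}

int main(void)
{
    dump_region("l2p",     0x20,   0x5000); /* offset 0.12 MiB, 80.00 MiB */
    dump_region("band_md", 0x5020, 0x80);   /* offset 80.12 MiB, 0.50 MiB */
    return 0;
}
```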
00:24:37.282 [2024-11-04 10:24:42.815171] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:37.282 [2024-11-04 10:24:42.815177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:37.282 [2024-11-04 10:24:42.815184] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815191] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815197] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:37.282 [2024-11-04 10:24:42.815218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:37.282 [2024-11-04 10:24:42.815226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:37.282 [2024-11-04 10:24:42.815242] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:37.282 [2024-11-04 10:24:42.815248] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:37.282 [2024-11-04 10:24:42.815255] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:37.282 [2024-11-04 10:24:42.815263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.815270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:37.282 [2024-11-04 10:24:42.815278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.603 ms 00:24:37.282 [2024-11-04 10:24:42.815285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.282 [2024-11-04 10:24:42.841310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.841355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:37.282 [2024-11-04 10:24:42.841366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.972 ms 00:24:37.282 [2024-11-04 10:24:42.841374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.282 [2024-11-04 10:24:42.841464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.841475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:37.282 [2024-11-04 10:24:42.841482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:24:37.282 [2024-11-04 10:24:42.841489] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.282 [2024-11-04 10:24:42.881939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.881998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:37.282 [2024-11-04 10:24:42.882011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.392 ms 00:24:37.282 [2024-11-04 10:24:42.882022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.282 [2024-11-04 10:24:42.882073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.882083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:37.282 [2024-11-04 10:24:42.882092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:37.282 [2024-11-04 10:24:42.882100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.282 [2024-11-04 10:24:42.882454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.282 [2024-11-04 10:24:42.882479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:37.282 [2024-11-04 10:24:42.882489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.301 ms 00:24:37.282 [2024-11-04 10:24:42.882497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.882623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.882639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:37.283 [2024-11-04 10:24:42.882648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:24:37.283 [2024-11-04 10:24:42.882655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.895713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.895747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:37.283 [2024-11-04 10:24:42.895758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.015 ms 00:24:37.283 [2024-11-04 10:24:42.895766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.907768] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:37.283 [2024-11-04 10:24:42.907812] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:37.283 [2024-11-04 10:24:42.907824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.907831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:37.283 [2024-11-04 10:24:42.907840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.952 ms 00:24:37.283 [2024-11-04 10:24:42.907847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.932006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.932062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:37.283 [2024-11-04 10:24:42.932086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.118 ms 00:24:37.283 [2024-11-04 10:24:42.932094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 
[2024-11-04 10:24:42.943798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.943831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:37.283 [2024-11-04 10:24:42.943841] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.654 ms 00:24:37.283 [2024-11-04 10:24:42.943849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.955100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.955132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:37.283 [2024-11-04 10:24:42.955142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.216 ms 00:24:37.283 [2024-11-04 10:24:42.955149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:42.955763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:42.955799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:37.283 [2024-11-04 10:24:42.955809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:24:37.283 [2024-11-04 10:24:42.955816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:43.010647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.283 [2024-11-04 10:24:43.010701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:37.283 [2024-11-04 10:24:43.010714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.813 ms 00:24:37.283 [2024-11-04 10:24:43.010722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.283 [2024-11-04 10:24:43.021360] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:37.541 [2024-11-04 10:24:43.024064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.024094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:37.541 [2024-11-04 10:24:43.024107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.287 ms 00:24:37.541 [2024-11-04 10:24:43.024116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.024214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.024226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:37.541 [2024-11-04 10:24:43.024234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:37.541 [2024-11-04 10:24:43.024242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.024303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.024314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:37.541 [2024-11-04 10:24:43.024322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:37.541 [2024-11-04 10:24:43.024330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.024367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.024377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:37.541 [2024-11-04 
10:24:43.024388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:24:37.541 [2024-11-04 10:24:43.024395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.024425] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:37.541 [2024-11-04 10:24:43.024435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.024442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:37.541 [2024-11-04 10:24:43.024449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:37.541 [2024-11-04 10:24:43.024456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.047561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.047615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:37.541 [2024-11-04 10:24:43.047628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.084 ms 00:24:37.541 [2024-11-04 10:24:43.047636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.047719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:37.541 [2024-11-04 10:24:43.047730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:37.541 [2024-11-04 10:24:43.047739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:37.541 [2024-11-04 10:24:43.047746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:37.541 [2024-11-04 10:24:43.049893] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 258.156 ms, result 0 00:24:38.511  [2024-11-04T10:24:45.189Z] Copying: 44/1024 [MB] (44 MBps) [2024-11-04T10:24:46.121Z] Copying: 88/1024 [MB] (43 MBps) [2024-11-04T10:24:47.493Z] Copying: 132/1024 [MB] (44 MBps) [2024-11-04T10:24:48.425Z] Copying: 180/1024 [MB] (47 MBps) [2024-11-04T10:24:49.358Z] Copying: 206/1024 [MB] (25 MBps) [2024-11-04T10:24:50.288Z] Copying: 235/1024 [MB] (28 MBps) [2024-11-04T10:24:51.218Z] Copying: 258/1024 [MB] (23 MBps) [2024-11-04T10:24:52.149Z] Copying: 276/1024 [MB] (17 MBps) [2024-11-04T10:24:53.081Z] Copying: 298/1024 [MB] (22 MBps) [2024-11-04T10:24:54.454Z] Copying: 343/1024 [MB] (44 MBps) [2024-11-04T10:24:55.098Z] Copying: 370/1024 [MB] (27 MBps) [2024-11-04T10:24:56.468Z] Copying: 388/1024 [MB] (17 MBps) [2024-11-04T10:24:57.401Z] Copying: 408/1024 [MB] (20 MBps) [2024-11-04T10:24:58.336Z] Copying: 421/1024 [MB] (13 MBps) [2024-11-04T10:24:59.335Z] Copying: 440/1024 [MB] (18 MBps) [2024-11-04T10:25:00.267Z] Copying: 461/1024 [MB] (21 MBps) [2024-11-04T10:25:01.198Z] Copying: 486/1024 [MB] (24 MBps) [2024-11-04T10:25:02.128Z] Copying: 509/1024 [MB] (23 MBps) [2024-11-04T10:25:03.498Z] Copying: 526/1024 [MB] (16 MBps) [2024-11-04T10:25:04.062Z] Copying: 545/1024 [MB] (19 MBps) [2024-11-04T10:25:05.434Z] Copying: 556/1024 [MB] (11 MBps) [2024-11-04T10:25:06.366Z] Copying: 585/1024 [MB] (28 MBps) [2024-11-04T10:25:07.327Z] Copying: 631/1024 [MB] (45 MBps) [2024-11-04T10:25:08.260Z] Copying: 676/1024 [MB] (45 MBps) [2024-11-04T10:25:09.196Z] Copying: 722/1024 [MB] (45 MBps) [2024-11-04T10:25:10.128Z] Copying: 734/1024 [MB] (11 MBps) [2024-11-04T10:25:11.499Z] Copying: 763/1024 [MB] (28 MBps) [2024-11-04T10:25:12.063Z] Copying: 798/1024 [MB] (35 
MBps) [2024-11-04T10:25:13.432Z] Copying: 840/1024 [MB] (41 MBps) [2024-11-04T10:25:14.364Z] Copying: 884/1024 [MB] (43 MBps) [2024-11-04T10:25:15.317Z] Copying: 930/1024 [MB] (46 MBps) [2024-11-04T10:25:16.254Z] Copying: 975/1024 [MB] (44 MBps) [2024-11-04T10:25:17.192Z] Copying: 1016/1024 [MB] (41 MBps) [2024-11-04T10:25:17.450Z] Copying: 1048244/1048576 [kB] (7224 kBps) [2024-11-04T10:25:17.450Z] Copying: 1024/1024 [MB] (average 29 MBps)[2024-11-04 10:25:17.445366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.705 [2024-11-04 10:25:17.445580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:11.705 [2024-11-04 10:25:17.445656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:11.705 [2024-11-04 10:25:17.445714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.705 [2024-11-04 10:25:17.446683] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:11.964 [2024-11-04 10:25:17.451353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.451463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:11.964 [2024-11-04 10:25:17.451521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.548 ms 00:25:11.964 [2024-11-04 10:25:17.451544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.463876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.463987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:11.964 [2024-11-04 10:25:17.464045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.194 ms 00:25:11.964 [2024-11-04 10:25:17.464068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.481494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.481605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:11.964 [2024-11-04 10:25:17.481679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.398 ms 00:25:11.964 [2024-11-04 10:25:17.481704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.487922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.488019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:11.964 [2024-11-04 10:25:17.488043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.154 ms 00:25:11.964 [2024-11-04 10:25:17.488051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.511592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.511642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:11.964 [2024-11-04 10:25:17.511655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.491 ms 00:25:11.964 [2024-11-04 10:25:17.511663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.533221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.533273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:11.964 [2024-11-04 10:25:17.533286] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.517 ms 00:25:11.964 [2024-11-04 10:25:17.533295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.589825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.589887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:11.964 [2024-11-04 10:25:17.589900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.480 ms 00:25:11.964 [2024-11-04 10:25:17.589917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.614114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.964 [2024-11-04 10:25:17.614160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:11.964 [2024-11-04 10:25:17.614172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.181 ms 00:25:11.964 [2024-11-04 10:25:17.614179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.964 [2024-11-04 10:25:17.637628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.965 [2024-11-04 10:25:17.637685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:11.965 [2024-11-04 10:25:17.637697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.412 ms 00:25:11.965 [2024-11-04 10:25:17.637705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.965 [2024-11-04 10:25:17.659786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.965 [2024-11-04 10:25:17.659837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:11.965 [2024-11-04 10:25:17.659850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.036 ms 00:25:11.965 [2024-11-04 10:25:17.659858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.965 [2024-11-04 10:25:17.682437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.965 [2024-11-04 10:25:17.682482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:11.965 [2024-11-04 10:25:17.682494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.514 ms 00:25:11.965 [2024-11-04 10:25:17.682502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.965 [2024-11-04 10:25:17.682537] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:11.965 [2024-11-04 10:25:17.682552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:25:11.965 [2024-11-04 10:25:17.682562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:25:11.965 [2024-11-04 10:25:17.682608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.682997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:11.965 [2024-11-04 10:25:17.683158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683166] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:11.966 [2024-11-04 10:25:17.683313] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:11.966 [2024-11-04 10:25:17.683320] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96f32096-1409-4fdb-af0d-af6bfac0fff4 00:25:11.966 [2024-11-04 10:25:17.683328] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:25:11.966 [2024-11-04 10:25:17.683335] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128960 00:25:11.966 [2024-11-04 10:25:17.683353] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:25:11.966 [2024-11-04 10:25:17.683361] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:25:11.966 [2024-11-04 10:25:17.683368] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:11.966 [2024-11-04 10:25:17.683376] 
ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:11.966 [2024-11-04 10:25:17.683383] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:11.966 [2024-11-04 10:25:17.683390] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:11.966 [2024-11-04 10:25:17.683396] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:11.966 [2024-11-04 10:25:17.683402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.966 [2024-11-04 10:25:17.683411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:11.966 [2024-11-04 10:25:17.683418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.866 ms 00:25:11.966 [2024-11-04 10:25:17.683425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.966 [2024-11-04 10:25:17.696142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.966 [2024-11-04 10:25:17.696190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:11.966 [2024-11-04 10:25:17.696201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.699 ms 00:25:11.966 [2024-11-04 10:25:17.696209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:11.966 [2024-11-04 10:25:17.696562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:11.966 [2024-11-04 10:25:17.696571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:11.966 [2024-11-04 10:25:17.696579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms 00:25:11.966 [2024-11-04 10:25:17.696587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.223 [2024-11-04 10:25:17.729391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.223 [2024-11-04 10:25:17.729444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:12.223 [2024-11-04 10:25:17.729455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.223 [2024-11-04 10:25:17.729463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.223 [2024-11-04 10:25:17.729526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.223 [2024-11-04 10:25:17.729534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:12.223 [2024-11-04 10:25:17.729541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.223 [2024-11-04 10:25:17.729549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.223 [2024-11-04 10:25:17.729614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.223 [2024-11-04 10:25:17.729624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:12.223 [2024-11-04 10:25:17.729631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.223 [2024-11-04 10:25:17.729639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.729654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.729661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:12.224 [2024-11-04 10:25:17.729669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.729675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:25:12.224 [2024-11-04 10:25:17.808430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.808484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:12.224 [2024-11-04 10:25:17.808496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.808504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:12.224 [2024-11-04 10:25:17.872482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:12.224 [2024-11-04 10:25:17.872582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:12.224 [2024-11-04 10:25:17.872636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:12.224 [2024-11-04 10:25:17.872750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:12.224 [2024-11-04 10:25:17.872821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:12.224 [2024-11-04 10:25:17.872881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 10:25:17.872888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.872929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:12.224 [2024-11-04 10:25:17.872938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:12.224 [2024-11-04 10:25:17.872947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:12.224 [2024-11-04 
10:25:17.872954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:12.224 [2024-11-04 10:25:17.873060] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 430.471 ms, result 0 00:25:14.787 00:25:14.787 00:25:14.787 10:25:20 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:17.325 10:25:22 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:17.325 [2024-11-04 10:25:22.720886] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:25:17.325 [2024-11-04 10:25:22.720985] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78441 ] 00:25:17.325 [2024-11-04 10:25:22.871815] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:17.325 [2024-11-04 10:25:22.978022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.583 [2024-11-04 10:25:23.247018] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.583 [2024-11-04 10:25:23.247082] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:17.842 [2024-11-04 10:25:23.401235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.401287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:17.842 [2024-11-04 10:25:23.401303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:17.842 [2024-11-04 10:25:23.401312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.401363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.401374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:17.842 [2024-11-04 10:25:23.401384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:25:17.842 [2024-11-04 10:25:23.401391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.401411] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:17.842 [2024-11-04 10:25:23.402123] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:17.842 [2024-11-04 10:25:23.402154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.402162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:17.842 [2024-11-04 10:25:23.402171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.747 ms 00:25:17.842 [2024-11-04 10:25:23.402178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.403275] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:17.842 [2024-11-04 10:25:23.415745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.415801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Load super block 00:25:17.842 [2024-11-04 10:25:23.415815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.471 ms 00:25:17.842 [2024-11-04 10:25:23.415825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.415897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.415910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:17.842 [2024-11-04 10:25:23.415918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:17.842 [2024-11-04 10:25:23.415925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.421703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.421743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:17.842 [2024-11-04 10:25:23.421755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.704 ms 00:25:17.842 [2024-11-04 10:25:23.421763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.421854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.421864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:17.842 [2024-11-04 10:25:23.421872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:17.842 [2024-11-04 10:25:23.421879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.421931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.421940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:17.842 [2024-11-04 10:25:23.421948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:17.842 [2024-11-04 10:25:23.421955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.421978] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:17.842 [2024-11-04 10:25:23.425345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.425382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:17.842 [2024-11-04 10:25:23.425392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.373 ms 00:25:17.842 [2024-11-04 10:25:23.425402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.425434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.425443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:17.842 [2024-11-04 10:25:23.425450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:17.842 [2024-11-04 10:25:23.425458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.425479] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:17.842 [2024-11-04 10:25:23.425498] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:17.842 [2024-11-04 10:25:23.425534] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:17.842 
[2024-11-04 10:25:23.425552] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:17.842 [2024-11-04 10:25:23.425655] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:17.842 [2024-11-04 10:25:23.425665] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:17.842 [2024-11-04 10:25:23.425675] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:17.842 [2024-11-04 10:25:23.425686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:17.842 [2024-11-04 10:25:23.425695] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:17.842 [2024-11-04 10:25:23.425704] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:17.842 [2024-11-04 10:25:23.425711] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:17.842 [2024-11-04 10:25:23.425718] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:17.842 [2024-11-04 10:25:23.425725] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:17.842 [2024-11-04 10:25:23.425736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.425743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:17.842 [2024-11-04 10:25:23.425751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:25:17.842 [2024-11-04 10:25:23.425758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.425850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.842 [2024-11-04 10:25:23.425859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:17.842 [2024-11-04 10:25:23.425867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:25:17.842 [2024-11-04 10:25:23.425874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.842 [2024-11-04 10:25:23.426003] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:17.842 [2024-11-04 10:25:23.426022] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:17.842 [2024-11-04 10:25:23.426030] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.842 [2024-11-04 10:25:23.426038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.842 [2024-11-04 10:25:23.426046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:17.842 [2024-11-04 10:25:23.426053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:17.842 [2024-11-04 10:25:23.426060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:17.842 [2024-11-04 10:25:23.426067] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:17.843 [2024-11-04 10:25:23.426074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.843 [2024-11-04 10:25:23.426087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:17.843 [2024-11-04 10:25:23.426094] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:17.843 [2024-11-04 10:25:23.426100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:17.843 [2024-11-04 10:25:23.426106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:17.843 [2024-11-04 10:25:23.426114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:17.843 [2024-11-04 10:25:23.426127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:17.843 [2024-11-04 10:25:23.426140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:17.843 [2024-11-04 10:25:23.426160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426166] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:17.843 [2024-11-04 10:25:23.426179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:17.843 [2024-11-04 10:25:23.426197] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426203] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:17.843 [2024-11-04 10:25:23.426216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:17.843 [2024-11-04 10:25:23.426235] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426241] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.843 [2024-11-04 10:25:23.426247] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:17.843 [2024-11-04 10:25:23.426253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:17.843 [2024-11-04 10:25:23.426259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:17.843 [2024-11-04 10:25:23.426266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:17.843 [2024-11-04 10:25:23.426272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:17.843 [2024-11-04 10:25:23.426278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:17.843 [2024-11-04 10:25:23.426290] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:17.843 [2024-11-04 10:25:23.426297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.843 
[2024-11-04 10:25:23.426303] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:17.843 [2024-11-04 10:25:23.426310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:17.843 [2024-11-04 10:25:23.426316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:17.843 [2024-11-04 10:25:23.426331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:17.843 [2024-11-04 10:25:23.426338] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:17.843 [2024-11-04 10:25:23.426345] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:17.843 [2024-11-04 10:25:23.426352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:17.843 [2024-11-04 10:25:23.426358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:17.843 [2024-11-04 10:25:23.426364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:17.843 [2024-11-04 10:25:23.426372] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:17.843 [2024-11-04 10:25:23.426381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426389] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:17.843 [2024-11-04 10:25:23.426396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:17.843 [2024-11-04 10:25:23.426403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:17.843 [2024-11-04 10:25:23.426410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:17.843 [2024-11-04 10:25:23.426416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:17.843 [2024-11-04 10:25:23.426424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:17.843 [2024-11-04 10:25:23.426430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:17.843 [2024-11-04 10:25:23.426437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:17.843 [2024-11-04 10:25:23.426444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:17.843 [2024-11-04 10:25:23.426451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:17.843 [2024-11-04 
10:25:23.426472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:17.843 [2024-11-04 10:25:23.426486] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:17.843 [2024-11-04 10:25:23.426494] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426503] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:17.843 [2024-11-04 10:25:23.426510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:17.843 [2024-11-04 10:25:23.426517] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:17.843 [2024-11-04 10:25:23.426524] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:17.843 [2024-11-04 10:25:23.426531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.426539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:17.843 [2024-11-04 10:25:23.426546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.597 ms 00:25:17.843 [2024-11-04 10:25:23.426554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.452956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.453000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:17.843 [2024-11-04 10:25:23.453011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.357 ms 00:25:17.843 [2024-11-04 10:25:23.453018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.453106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.453118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:17.843 [2024-11-04 10:25:23.453126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:25:17.843 [2024-11-04 10:25:23.453133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.494499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.494550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:17.843 [2024-11-04 10:25:23.494563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.301 ms 00:25:17.843 [2024-11-04 10:25:23.494571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.494629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.494638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:17.843 [2024-11-04 10:25:23.494647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:17.843 [2024-11-04 
10:25:23.494657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.495051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.495077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:17.843 [2024-11-04 10:25:23.495086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:25:17.843 [2024-11-04 10:25:23.495094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.495222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.495243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:17.843 [2024-11-04 10:25:23.495252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:25:17.843 [2024-11-04 10:25:23.495259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.508341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.843 [2024-11-04 10:25:23.508392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:17.843 [2024-11-04 10:25:23.508402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.059 ms 00:25:17.843 [2024-11-04 10:25:23.508412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.843 [2024-11-04 10:25:23.520728] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:17.843 [2024-11-04 10:25:23.520763] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:17.844 [2024-11-04 10:25:23.520774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.844 [2024-11-04 10:25:23.520792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:17.844 [2024-11-04 10:25:23.520802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.258 ms 00:25:17.844 [2024-11-04 10:25:23.520809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.844 [2024-11-04 10:25:23.544895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.844 [2024-11-04 10:25:23.544961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:17.844 [2024-11-04 10:25:23.544975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.043 ms 00:25:17.844 [2024-11-04 10:25:23.544983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.844 [2024-11-04 10:25:23.556809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.844 [2024-11-04 10:25:23.556858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:17.844 [2024-11-04 10:25:23.556869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.758 ms 00:25:17.844 [2024-11-04 10:25:23.556877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:17.844 [2024-11-04 10:25:23.568493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.844 [2024-11-04 10:25:23.568538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:17.844 [2024-11-04 10:25:23.568550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.575 ms 00:25:17.844 [2024-11-04 10:25:23.568558] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:25:17.844 [2024-11-04 10:25:23.569216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:17.844 [2024-11-04 10:25:23.569242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:17.844 [2024-11-04 10:25:23.569251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.551 ms 00:25:17.844 [2024-11-04 10:25:23.569258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.625958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.626021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:18.102 [2024-11-04 10:25:23.626034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.677 ms 00:25:18.102 [2024-11-04 10:25:23.626049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.637014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:18.102 [2024-11-04 10:25:23.639755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.639813] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:18.102 [2024-11-04 10:25:23.639825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.644 ms 00:25:18.102 [2024-11-04 10:25:23.639833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.639936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.639953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:18.102 [2024-11-04 10:25:23.639962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:18.102 [2024-11-04 10:25:23.639970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.641441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.641475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:18.102 [2024-11-04 10:25:23.641484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.429 ms 00:25:18.102 [2024-11-04 10:25:23.641492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.641518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.641526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:18.102 [2024-11-04 10:25:23.641534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:18.102 [2024-11-04 10:25:23.641541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.641575] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:18.102 [2024-11-04 10:25:23.641587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.641595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:18.102 [2024-11-04 10:25:23.641602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:25:18.102 [2024-11-04 10:25:23.641609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.665381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:25:18.102 [2024-11-04 10:25:23.665451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:18.102 [2024-11-04 10:25:23.665463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.754 ms 00:25:18.102 [2024-11-04 10:25:23.665472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.665562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:18.102 [2024-11-04 10:25:23.665572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:18.102 [2024-11-04 10:25:23.665581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:25:18.102 [2024-11-04 10:25:23.665588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:18.102 [2024-11-04 10:25:23.666580] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 264.909 ms, result 0 00:25:19.475 [2024-11-04T10:25:46.236Z] Copying: 1024/1024 [MB] (average 46 MBps) [2024-11-04 10:25:46.135533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.491 [2024-11-04 10:25:46.135597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:40.491 [2024-11-04 10:25:46.135615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:40.491 [2024-11-04 10:25:46.135623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.491 [2024-11-04 10:25:46.135644] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:40.491 [2024-11-04 10:25:46.138298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.491 [2024-11-04 10:25:46.138331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:40.491 [2024-11-04 10:25:46.138343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.639 ms 00:25:40.491 [2024-11-04 10:25:46.138351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.491 [2024-11-04 10:25:46.138570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:25:40.491 [2024-11-04 10:25:46.138585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:40.491 [2024-11-04 10:25:46.138594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.196 ms 00:25:40.491 [2024-11-04 10:25:46.138604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.491 [2024-11-04 10:25:46.149159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.491 [2024-11-04 10:25:46.149197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:40.492 [2024-11-04 10:25:46.149209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.539 ms 00:25:40.492 [2024-11-04 10:25:46.149217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.492 [2024-11-04 10:25:46.156253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.492 [2024-11-04 10:25:46.156284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:40.492 [2024-11-04 10:25:46.156295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.011 ms 00:25:40.492 [2024-11-04 10:25:46.156309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.492 [2024-11-04 10:25:46.179552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.492 [2024-11-04 10:25:46.179586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:40.492 [2024-11-04 10:25:46.179597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.206 ms 00:25:40.492 [2024-11-04 10:25:46.179605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.492 [2024-11-04 10:25:46.193303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.492 [2024-11-04 10:25:46.193336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:40.492 [2024-11-04 10:25:46.193347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.667 ms 00:25:40.492 [2024-11-04 10:25:46.193356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.492 [2024-11-04 10:25:46.194929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.492 [2024-11-04 10:25:46.194963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:40.492 [2024-11-04 10:25:46.194972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.538 ms 00:25:40.492 [2024-11-04 10:25:46.194979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.492 [2024-11-04 10:25:46.217581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.492 [2024-11-04 10:25:46.217612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:40.492 [2024-11-04 10:25:46.217621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.587 ms 00:25:40.492 [2024-11-04 10:25:46.217629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.751 [2024-11-04 10:25:46.239840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.751 [2024-11-04 10:25:46.239875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:40.751 [2024-11-04 10:25:46.239893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.180 ms 00:25:40.751 [2024-11-04 10:25:46.239901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.751 [2024-11-04 10:25:46.262257] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.751 [2024-11-04 10:25:46.262290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:40.751 [2024-11-04 10:25:46.262300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.325 ms 00:25:40.751 [2024-11-04 10:25:46.262307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.751 [2024-11-04 10:25:46.284383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.751 [2024-11-04 10:25:46.284419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:40.751 [2024-11-04 10:25:46.284430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.023 ms 00:25:40.751 [2024-11-04 10:25:46.284438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.751 [2024-11-04 10:25:46.284469] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:40.751 [2024-11-04 10:25:46.284483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:40.751 [2024-11-04 10:25:46.284493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:40.751 [2024-11-04 10:25:46.284501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 
00:25:40.751 [2024-11-04 10:25:46.284618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 
wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:40.751 [2024-11-04 10:25:46.284911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.284998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285171] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:40.752 [2024-11-04 10:25:46.285237] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:40.752 [2024-11-04 10:25:46.285245] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96f32096-1409-4fdb-af0d-af6bfac0fff4 00:25:40.752 [2024-11-04 10:25:46.285253] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:40.752 [2024-11-04 10:25:46.285260] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 136640 00:25:40.752 [2024-11-04 10:25:46.285267] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 134656 00:25:40.752 [2024-11-04 10:25:46.285275] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0147 00:25:40.752 [2024-11-04 10:25:46.285286] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:40.752 [2024-11-04 10:25:46.285294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:40.752 [2024-11-04 10:25:46.285301] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:40.752 [2024-11-04 10:25:46.285314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:40.752 [2024-11-04 10:25:46.285320] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:40.752 [2024-11-04 10:25:46.285327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.752 [2024-11-04 10:25:46.285335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:40.752 [2024-11-04 10:25:46.285343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:25:40.752 [2024-11-04 10:25:46.285350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.297677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.752 [2024-11-04 10:25:46.297711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:40.752 [2024-11-04 10:25:46.297726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.311 ms 00:25:40.752 [2024-11-04 10:25:46.297734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.298086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:40.752 [2024-11-04 10:25:46.298100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:40.752 [2024-11-04 10:25:46.298109] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.334 ms 00:25:40.752 [2024-11-04 10:25:46.298116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.330178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.330215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:40.752 [2024-11-04 10:25:46.330225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.330233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.330294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.330302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:40.752 [2024-11-04 10:25:46.330309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.330316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.330375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.330387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:40.752 [2024-11-04 10:25:46.330395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.330402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.330416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.330424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:40.752 [2024-11-04 10:25:46.330431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.330438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.407828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.407889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:40.752 [2024-11-04 10:25:46.407901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.407909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.471313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.471361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:40.752 [2024-11-04 10:25:46.471372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.471380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.471454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.471463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:40.752 [2024-11-04 10:25:46.471475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.471482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.471514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.471522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 
00:25:40.752 [2024-11-04 10:25:46.471529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.471536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.752 [2024-11-04 10:25:46.471618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.752 [2024-11-04 10:25:46.471627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:40.752 [2024-11-04 10:25:46.471635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.752 [2024-11-04 10:25:46.471644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.753 [2024-11-04 10:25:46.471671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.753 [2024-11-04 10:25:46.471684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:40.753 [2024-11-04 10:25:46.471692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.753 [2024-11-04 10:25:46.471700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.753 [2024-11-04 10:25:46.471731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.753 [2024-11-04 10:25:46.471740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:40.753 [2024-11-04 10:25:46.471748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.753 [2024-11-04 10:25:46.471755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.753 [2024-11-04 10:25:46.471812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:40.753 [2024-11-04 10:25:46.471827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:40.753 [2024-11-04 10:25:46.471835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:40.753 [2024-11-04 10:25:46.471842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:40.753 [2024-11-04 10:25:46.471949] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 336.389 ms, result 0 00:25:42.156 00:25:42.157 00:25:42.157 10:25:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:44.686 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:44.686 10:25:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:44.686 [2024-11-04 10:25:49.982011] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:25:44.686 [2024-11-04 10:25:49.982114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78723 ] 00:25:44.686 [2024-11-04 10:25:50.136455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.686 [2024-11-04 10:25:50.239012] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.944 [2024-11-04 10:25:50.494583] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.944 [2024-11-04 10:25:50.494646] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:44.944 [2024-11-04 10:25:50.647956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.648009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:44.944 [2024-11-04 10:25:50.648025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:44.944 [2024-11-04 10:25:50.648034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.648083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.648093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:44.944 [2024-11-04 10:25:50.648104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:25:44.944 [2024-11-04 10:25:50.648111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.648130] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:44.944 [2024-11-04 10:25:50.648865] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:44.944 [2024-11-04 10:25:50.648885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.648893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:44.944 [2024-11-04 10:25:50.648901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:25:44.944 [2024-11-04 10:25:50.648909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.650241] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:44.944 [2024-11-04 10:25:50.662610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.662651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:44.944 [2024-11-04 10:25:50.662665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.371 ms 00:25:44.944 [2024-11-04 10:25:50.662672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.662729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.662741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:44.944 [2024-11-04 10:25:50.662750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:25:44.944 [2024-11-04 10:25:50.662757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.667820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:44.944 [2024-11-04 10:25:50.667852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:44.944 [2024-11-04 10:25:50.667862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.988 ms 00:25:44.944 [2024-11-04 10:25:50.667869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.667943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.667952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:44.944 [2024-11-04 10:25:50.667960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:44.944 [2024-11-04 10:25:50.667968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.668010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.668019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:44.944 [2024-11-04 10:25:50.668027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:44.944 [2024-11-04 10:25:50.668034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.944 [2024-11-04 10:25:50.668056] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:44.944 [2024-11-04 10:25:50.671331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.944 [2024-11-04 10:25:50.671359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:44.945 [2024-11-04 10:25:50.671368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms 00:25:44.945 [2024-11-04 10:25:50.671379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.945 [2024-11-04 10:25:50.671406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.945 [2024-11-04 10:25:50.671414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:44.945 [2024-11-04 10:25:50.671422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:44.945 [2024-11-04 10:25:50.671429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.945 [2024-11-04 10:25:50.671448] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:44.945 [2024-11-04 10:25:50.671466] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:44.945 [2024-11-04 10:25:50.671499] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:44.945 [2024-11-04 10:25:50.671516] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:44.945 [2024-11-04 10:25:50.671618] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:44.945 [2024-11-04 10:25:50.671628] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:44.945 [2024-11-04 10:25:50.671639] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:44.945 [2024-11-04 10:25:50.671648] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:44.945 [2024-11-04 10:25:50.671657] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:44.945 [2024-11-04 10:25:50.671665] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:44.945 [2024-11-04 10:25:50.671672] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:44.945 [2024-11-04 10:25:50.671679] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:44.945 [2024-11-04 10:25:50.671686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:44.945 [2024-11-04 10:25:50.671696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.945 [2024-11-04 10:25:50.671703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:44.945 [2024-11-04 10:25:50.671710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:25:44.945 [2024-11-04 10:25:50.671717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.945 [2024-11-04 10:25:50.671828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.945 [2024-11-04 10:25:50.671839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:44.945 [2024-11-04 10:25:50.671847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:25:44.945 [2024-11-04 10:25:50.671855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:44.945 [2024-11-04 10:25:50.671955] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:44.945 [2024-11-04 10:25:50.671967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:44.945 [2024-11-04 10:25:50.671975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.945 [2024-11-04 10:25:50.671983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.671991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:44.945 [2024-11-04 10:25:50.671998] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672011] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:44.945 [2024-11-04 10:25:50.672018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672024] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.945 [2024-11-04 10:25:50.672031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:44.945 [2024-11-04 10:25:50.672037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:44.945 [2024-11-04 10:25:50.672044] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:44.945 [2024-11-04 10:25:50.672051] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:44.945 [2024-11-04 10:25:50.672057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:44.945 [2024-11-04 10:25:50.672070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672077] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:44.945 [2024-11-04 10:25:50.672084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672091] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:44.945 [2024-11-04 10:25:50.672105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:44.945 [2024-11-04 10:25:50.672123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:44.945 [2024-11-04 10:25:50.672142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:44.945 [2024-11-04 10:25:50.672162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672168] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:44.945 [2024-11-04 10:25:50.672181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.945 [2024-11-04 10:25:50.672194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:44.945 [2024-11-04 10:25:50.672201] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:44.945 [2024-11-04 10:25:50.672207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:44.945 [2024-11-04 10:25:50.672214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:44.945 [2024-11-04 10:25:50.672220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:44.945 [2024-11-04 10:25:50.672227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:44.945 [2024-11-04 10:25:50.672239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:44.945 [2024-11-04 10:25:50.672245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672252] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:44.945 [2024-11-04 10:25:50.672263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:44.945 [2024-11-04 10:25:50.672271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672278] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:44.945 [2024-11-04 10:25:50.672285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:44.945 [2024-11-04 10:25:50.672294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:44.945 [2024-11-04 10:25:50.672300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:44.945 
[2024-11-04 10:25:50.672308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:44.945 [2024-11-04 10:25:50.672315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:44.945 [2024-11-04 10:25:50.672321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:44.945 [2024-11-04 10:25:50.672329] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:44.945 [2024-11-04 10:25:50.672338] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:44.945 [2024-11-04 10:25:50.672353] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:44.945 [2024-11-04 10:25:50.672379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:44.945 [2024-11-04 10:25:50.672387] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:44.945 [2024-11-04 10:25:50.672394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:44.945 [2024-11-04 10:25:50.672402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:44.945 [2024-11-04 10:25:50.672409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:44.945 [2024-11-04 10:25:50.672416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:44.945 [2024-11-04 10:25:50.672423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:44.945 [2024-11-04 10:25:50.672430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672444] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:44.945 [2024-11-04 10:25:50.672465] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:44.945 [2024-11-04 10:25:50.672473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672483] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:44.945 [2024-11-04 10:25:50.672490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:44.945 [2024-11-04 10:25:50.672497] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:44.945 [2024-11-04 10:25:50.672504] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:44.945 [2024-11-04 10:25:50.672512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:44.945 [2024-11-04 10:25:50.672519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:44.945 [2024-11-04 10:25:50.672527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:25:44.945 [2024-11-04 10:25:50.672534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.698711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.698754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:45.204 [2024-11-04 10:25:50.698766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.122 ms 00:25:45.204 [2024-11-04 10:25:50.698774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.698871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.698883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:45.204 [2024-11-04 10:25:50.698891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:25:45.204 [2024-11-04 10:25:50.698899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.739098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.739146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:45.204 [2024-11-04 10:25:50.739160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.140 ms 00:25:45.204 [2024-11-04 10:25:50.739168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.739218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.739227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:45.204 [2024-11-04 10:25:50.739236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:25:45.204 [2024-11-04 10:25:50.739246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.739614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.739630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:45.204 [2024-11-04 10:25:50.739639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:45.204 [2024-11-04 10:25:50.739646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.739772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.739804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:45.204 [2024-11-04 10:25:50.739818] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:25:45.204 [2024-11-04 10:25:50.739826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.752865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.752898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:45.204 [2024-11-04 10:25:50.752909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.014 ms 00:25:45.204 [2024-11-04 10:25:50.752919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.764954] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:45.204 [2024-11-04 10:25:50.764990] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:45.204 [2024-11-04 10:25:50.765001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.765009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:45.204 [2024-11-04 10:25:50.765017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.968 ms 00:25:45.204 [2024-11-04 10:25:50.765024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.789006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.789062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:45.204 [2024-11-04 10:25:50.789073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.941 ms 00:25:45.204 [2024-11-04 10:25:50.789081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.800959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.800995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:45.204 [2024-11-04 10:25:50.801006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.820 ms 00:25:45.204 [2024-11-04 10:25:50.801013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.811955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.812000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:45.204 [2024-11-04 10:25:50.812010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.909 ms 00:25:45.204 [2024-11-04 10:25:50.812017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.812628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.812650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:45.204 [2024-11-04 10:25:50.812659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:25:45.204 [2024-11-04 10:25:50.812667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.867768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.867848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:45.204 [2024-11-04 10:25:50.867861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.080 ms 00:25:45.204 [2024-11-04 10:25:50.867875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.878029] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:45.204 [2024-11-04 10:25:50.880513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.880548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:45.204 [2024-11-04 10:25:50.880560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.593 ms 00:25:45.204 [2024-11-04 10:25:50.880568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.880660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.880670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:45.204 [2024-11-04 10:25:50.880678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:25:45.204 [2024-11-04 10:25:50.880686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.881257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.881283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:45.204 [2024-11-04 10:25:50.881292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:25:45.204 [2024-11-04 10:25:50.881299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.881321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.881330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:45.204 [2024-11-04 10:25:50.881338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:45.204 [2024-11-04 10:25:50.881345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.881377] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:45.204 [2024-11-04 10:25:50.881388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.881396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:45.204 [2024-11-04 10:25:50.881404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:45.204 [2024-11-04 10:25:50.881411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.904629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.904682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:45.204 [2024-11-04 10:25:50.904696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.200 ms 00:25:45.204 [2024-11-04 10:25:50.904705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:45.204 [2024-11-04 10:25:50.904800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:45.204 [2024-11-04 10:25:50.904811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:45.204 [2024-11-04 10:25:50.904819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:25:45.204 [2024-11-04 10:25:50.904826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
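
Every step in the startup trace above is emitted by trace_step in mngt/ftl_mngt.c as the same quadruple: the step kind (Action here, Rollback on teardown), its name, its duration, and its status; the finish_msg record just below adds them up to the 257.404 ms total for this 'FTL startup' process. As an aside, not part of the test itself, the per-step durations can be pulled from a saved copy of this log no matter how the records are wrapped:

    # 'build.log' is a stand-in name for a saved copy of this console output;
    # grep -o prints each match on its own line even when records share a line.
    grep -o 'duration: [0-9.]* ms' build.log
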
00:25:45.204 [2024-11-04 10:25:50.905775] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 257.404 ms, result 0 00:25:46.603  [2024-11-04T10:25:53.282Z] Copying: 48/1024 [MB] (48 MBps) [2024-11-04T10:25:54.226Z] Copying: 93/1024 [MB] (45 MBps) [2024-11-04T10:25:55.180Z] Copying: 137/1024 [MB] (43 MBps) [2024-11-04T10:25:56.122Z] Copying: 182/1024 [MB] (44 MBps) [2024-11-04T10:25:57.502Z] Copying: 223/1024 [MB] (41 MBps) [2024-11-04T10:25:58.438Z] Copying: 265/1024 [MB] (41 MBps) [2024-11-04T10:25:59.375Z] Copying: 300/1024 [MB] (35 MBps) [2024-11-04T10:26:00.339Z] Copying: 335/1024 [MB] (34 MBps) [2024-11-04T10:26:01.277Z] Copying: 373/1024 [MB] (37 MBps) [2024-11-04T10:26:02.215Z] Copying: 398/1024 [MB] (25 MBps) [2024-11-04T10:26:03.150Z] Copying: 417/1024 [MB] (19 MBps) [2024-11-04T10:26:04.129Z] Copying: 445/1024 [MB] (28 MBps) [2024-11-04T10:26:05.507Z] Copying: 469/1024 [MB] (24 MBps) [2024-11-04T10:26:06.445Z] Copying: 496/1024 [MB] (27 MBps) [2024-11-04T10:26:07.389Z] Copying: 518/1024 [MB] (21 MBps) [2024-11-04T10:26:08.333Z] Copying: 532/1024 [MB] (14 MBps) [2024-11-04T10:26:09.275Z] Copying: 555/1024 [MB] (22 MBps) [2024-11-04T10:26:10.216Z] Copying: 572/1024 [MB] (17 MBps) [2024-11-04T10:26:11.166Z] Copying: 585/1024 [MB] (13 MBps) [2024-11-04T10:26:12.129Z] Copying: 600/1024 [MB] (14 MBps) [2024-11-04T10:26:13.513Z] Copying: 611/1024 [MB] (10 MBps) [2024-11-04T10:26:14.084Z] Copying: 636332/1048576 [kB] (9984 kBps) [2024-11-04T10:26:15.495Z] Copying: 645196/1048576 [kB] (8864 kBps) [2024-11-04T10:26:16.437Z] Copying: 654752/1048576 [kB] (9556 kBps) [2024-11-04T10:26:17.380Z] Copying: 649/1024 [MB] (10 MBps) [2024-11-04T10:26:18.324Z] Copying: 675072/1048576 [kB] (9488 kBps) [2024-11-04T10:26:19.272Z] Copying: 684388/1048576 [kB] (9316 kBps) [2024-11-04T10:26:20.215Z] Copying: 694128/1048576 [kB] (9740 kBps) [2024-11-04T10:26:21.199Z] Copying: 704200/1048576 [kB] (10072 kBps) [2024-11-04T10:26:22.138Z] Copying: 713996/1048576 [kB] (9796 kBps) [2024-11-04T10:26:23.524Z] Copying: 707/1024 [MB] (10 MBps) [2024-11-04T10:26:24.098Z] Copying: 733736/1048576 [kB] (9308 kBps) [2024-11-04T10:26:25.523Z] Copying: 743444/1048576 [kB] (9708 kBps) [2024-11-04T10:26:26.118Z] Copying: 753128/1048576 [kB] (9684 kBps) [2024-11-04T10:26:27.502Z] Copying: 746/1024 [MB] (10 MBps) [2024-11-04T10:26:28.444Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-04T10:26:29.385Z] Copying: 784532/1048576 [kB] (9796 kBps) [2024-11-04T10:26:30.327Z] Copying: 794208/1048576 [kB] (9676 kBps) [2024-11-04T10:26:31.271Z] Copying: 804332/1048576 [kB] (10124 kBps) [2024-11-04T10:26:32.292Z] Copying: 814024/1048576 [kB] (9692 kBps) [2024-11-04T10:26:33.234Z] Copying: 823756/1048576 [kB] (9732 kBps) [2024-11-04T10:26:34.177Z] Copying: 815/1024 [MB] (11 MBps) [2024-11-04T10:26:35.154Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-04T10:26:36.098Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-04T10:26:37.484Z] Copying: 866144/1048576 [kB] (9792 kBps) [2024-11-04T10:26:38.425Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-04T10:26:39.369Z] Copying: 886064/1048576 [kB] (9580 kBps) [2024-11-04T10:26:40.307Z] Copying: 896024/1048576 [kB] (9960 kBps) [2024-11-04T10:26:41.249Z] Copying: 885/1024 [MB] (10 MBps) [2024-11-04T10:26:42.191Z] Copying: 916620/1048576 [kB] (10024 kBps) [2024-11-04T10:26:43.133Z] Copying: 906/1024 [MB] (11 MBps) [2024-11-04T10:26:44.519Z] Copying: 938208/1048576 [kB] (9980 kBps) [2024-11-04T10:26:45.091Z] Copying: 929/1024 [MB] (13 MBps) 
[2024-11-04T10:26:46.476Z] Copying: 940/1024 [MB] (11 MBps) [2024-11-04T10:26:47.415Z] Copying: 972972/1048576 [kB] (9496 kBps) [2024-11-04T10:26:48.356Z] Copying: 982624/1048576 [kB] (9652 kBps) [2024-11-04T10:26:49.356Z] Copying: 992064/1048576 [kB] (9440 kBps) [2024-11-04T10:26:50.297Z] Copying: 979/1024 [MB] (10 MBps) [2024-11-04T10:26:51.236Z] Copying: 1012700/1048576 [kB] (9648 kBps) [2024-11-04T10:26:52.203Z] Copying: 999/1024 [MB] (10 MBps) [2024-11-04T10:26:53.144Z] Copying: 1032956/1048576 [kB] (9648 kBps) [2024-11-04T10:26:53.719Z] Copying: 1042664/1048576 [kB] (9708 kBps) [2024-11-04T10:26:53.719Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-04 10:26:53.667631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.667751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:47.974 [2024-11-04 10:26:53.667772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:47.974 [2024-11-04 10:26:53.667805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.974 [2024-11-04 10:26:53.667835] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:47.974 [2024-11-04 10:26:53.671138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.671199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:47.974 [2024-11-04 10:26:53.671213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.281 ms 00:26:47.974 [2024-11-04 10:26:53.671232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.974 [2024-11-04 10:26:53.671495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.671507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:47.974 [2024-11-04 10:26:53.671518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:26:47.974 [2024-11-04 10:26:53.671526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.974 [2024-11-04 10:26:53.675692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.675728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:47.974 [2024-11-04 10:26:53.675740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.151 ms 00:26:47.974 [2024-11-04 10:26:53.675749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.974 [2024-11-04 10:26:53.682114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.682169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:47.974 [2024-11-04 10:26:53.682182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.337 ms 00:26:47.974 [2024-11-04 10:26:53.682190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:47.974 [2024-11-04 10:26:53.710297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:47.974 [2024-11-04 10:26:53.710380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:47.974 [2024-11-04 10:26:53.710399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.029 ms 00:26:47.974 [2024-11-04 10:26:53.710407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.727425] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.727501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:48.237 [2024-11-04 10:26:53.727519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.949 ms 00:26:48.237 [2024-11-04 10:26:53.727528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.732847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.732911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:48.237 [2024-11-04 10:26:53.732936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.275 ms 00:26:48.237 [2024-11-04 10:26:53.732946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.761775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.761870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:48.237 [2024-11-04 10:26:53.761886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.809 ms 00:26:48.237 [2024-11-04 10:26:53.761895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.788675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.788767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:48.237 [2024-11-04 10:26:53.788805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.706 ms 00:26:48.237 [2024-11-04 10:26:53.788815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.815387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.815478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:48.237 [2024-11-04 10:26:53.815494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.493 ms 00:26:48.237 [2024-11-04 10:26:53.815503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.842473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.237 [2024-11-04 10:26:53.842546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:48.237 [2024-11-04 10:26:53.842563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.855 ms 00:26:48.237 [2024-11-04 10:26:53.842572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.237 [2024-11-04 10:26:53.842641] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:48.237 [2024-11-04 10:26:53.842658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:48.237 [2024-11-04 10:26:53.842679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:48.237 [2024-11-04 10:26:53.842689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842716] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 
10:26:53.842957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.842996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:48.237 [2024-11-04 10:26:53.843041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 
00:26:48.238 [2024-11-04 10:26:53.843148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 
wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:48.238 [2024-11-04 10:26:53.843516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:48.238 [2024-11-04 10:26:53.843525] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 96f32096-1409-4fdb-af0d-af6bfac0fff4 00:26:48.238 [2024-11-04 10:26:53.843538] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:48.238 [2024-11-04 10:26:53.843546] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:48.238 [2024-11-04 10:26:53.843554] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:48.238 [2024-11-04 10:26:53.843562] ftl_debug.c: 
216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:48.238 [2024-11-04 10:26:53.843570] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:48.238 [2024-11-04 10:26:53.843579] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:48.238 [2024-11-04 10:26:53.843597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:48.238 [2024-11-04 10:26:53.843605] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:48.238 [2024-11-04 10:26:53.843612] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:48.238 [2024-11-04 10:26:53.843620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.238 [2024-11-04 10:26:53.843628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:48.238 [2024-11-04 10:26:53.843638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.980 ms 00:26:48.238 [2024-11-04 10:26:53.843646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.857654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.238 [2024-11-04 10:26:53.857729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:48.238 [2024-11-04 10:26:53.857745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.983 ms 00:26:48.238 [2024-11-04 10:26:53.857754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.858239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:48.238 [2024-11-04 10:26:53.858259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:48.238 [2024-11-04 10:26:53.858270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:26:48.238 [2024-11-04 10:26:53.858290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.895009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.238 [2024-11-04 10:26:53.895094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:48.238 [2024-11-04 10:26:53.895112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.238 [2024-11-04 10:26:53.895122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.895213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.238 [2024-11-04 10:26:53.895224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:48.238 [2024-11-04 10:26:53.895234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.238 [2024-11-04 10:26:53.895249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.895374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.238 [2024-11-04 10:26:53.895385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:48.238 [2024-11-04 10:26:53.895395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.238 [2024-11-04 10:26:53.895403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.238 [2024-11-04 10:26:53.895420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.238 [2024-11-04 10:26:53.895429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 
00:26:48.238 [2024-11-04 10:26:53.895437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.238 [2024-11-04 10:26:53.895445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.499 [2024-11-04 10:26:53.983456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.499 [2024-11-04 10:26:53.983528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:48.499 [2024-11-04 10:26:53.983547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.499 [2024-11-04 10:26:53.983556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.055449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.055528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:48.500 [2024-11-04 10:26:54.055543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.055559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.055632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.055644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:48.500 [2024-11-04 10:26:54.055653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.055662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.055722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.055732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:48.500 [2024-11-04 10:26:54.055741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.055749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.055882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.055895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:48.500 [2024-11-04 10:26:54.055903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.055912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.055946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.055958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:48.500 [2024-11-04 10:26:54.055967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.055976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.056022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.056051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:48.500 [2024-11-04 10:26:54.056060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:48.500 [2024-11-04 10:26:54.056069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:48.500 [2024-11-04 10:26:54.056116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:48.500 [2024-11-04 10:26:54.056128] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:26:48.500 [2024-11-04 10:26:54.056138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:26:48.500 [2024-11-04 10:26:54.056147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:26:48.500 [2024-11-04 10:26:54.056287] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 388.623 ms, result 0
00:26:49.067
00:26:49.067
00:26:49.067 10:26:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:26:51.610 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK
00:26:51.610 10:26:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT
00:26:51.610 10:26:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill
00:26:51.610 10:26:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:26:51.610 10:26:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77344
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77344 ']'
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 77344
00:26:51.610 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77344) - No such process
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77344 is not found'
00:26:51.610 Process with pid 77344 is not found
00:26:51.610 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd
00:26:51.867 Remove shared memory files
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f
00:26:51.867
00:26:51.867 real 3m16.179s
00:26:51.867 user 3m36.964s
00:26:51.867 sys 0m24.190s
00:26:51.867 ************************************
00:26:51.867 END TEST ftl_dirty_shutdown
00:26:51.867 ************************************
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable
00:26:51.867 10:26:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x
00:26:51.868 10:26:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:26:51.868 10:26:57 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:26:51.868 10:26:57 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable
00:26:51.868 10:26:57 ftl -- common/autotest_common.sh@10 -- # set +x
00:26:51.868 ************************************
00:26:51.868 START TEST ftl_upgrade_shutdown
00:26:51.868 ************************************
00:26:51.868 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0
00:26:51.868 * Looking for test storage...
00:26:51.868 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:26:51.868 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:26:51.868 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version
00:26:51.868 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-:
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-:
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<'
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 ))
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:26:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.126 --rc genhtml_branch_coverage=1
00:26:52.126 --rc genhtml_function_coverage=1
00:26:52.126 --rc genhtml_legend=1
00:26:52.126 --rc geninfo_all_blocks=1
00:26:52.126 --rc geninfo_unexecuted_blocks=1
00:26:52.126
00:26:52.126 '
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:26:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.126 --rc genhtml_branch_coverage=1
00:26:52.126 --rc genhtml_function_coverage=1
00:26:52.126 --rc genhtml_legend=1
00:26:52.126 --rc geninfo_all_blocks=1
00:26:52.126 --rc geninfo_unexecuted_blocks=1
00:26:52.126
00:26:52.126 '
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:26:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.126 --rc genhtml_branch_coverage=1
00:26:52.126 --rc genhtml_function_coverage=1
00:26:52.126 --rc genhtml_legend=1
00:26:52.126 --rc geninfo_all_blocks=1
00:26:52.126 --rc geninfo_unexecuted_blocks=1
00:26:52.126
00:26:52.126 '
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:26:52.126 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:26:52.126 --rc genhtml_branch_coverage=1
00:26:52.126 --rc genhtml_function_coverage=1
00:26:52.126 --rc genhtml_legend=1
00:26:52.126 --rc geninfo_all_blocks=1
00:26:52.126 --rc geninfo_unexecuted_blocks=1
00:26:52.126
00:26:52.126 '
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl
00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown --
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:52.126 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:26:52.127 10:26:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79476 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79476 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79476 ']' 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:52.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:26:52.127 10:26:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:26:52.127 [2024-11-04 10:26:57.768072] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
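Annotation: the waitforlisten step above reduces to launching spdk_tgt and polling its RPC socket until the target answers. A minimal standalone sketch under that assumption — the retry count and sleep interval below are illustrative, not the helper's real values:

    #!/usr/bin/env bash
    SPDK=/home/vagrant/spdk_repo/spdk
    # Start the target pinned to core 0, exactly as the xtrace above shows.
    "$SPDK/build/bin/spdk_tgt" '--cpumask=[0]' &
    spdk_tgt_pid=$!
    # Poll the default RPC socket until the target responds (illustrative loop).
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done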
00:26:52.127 [2024-11-04 10:26:57.768199] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79476 ] 00:26:52.385 [2024-11-04 10:26:57.927944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:52.385 [2024-11-04 10:26:58.091411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:52.951 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:52.951 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:26:52.952 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:26:53.210 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:26:53.503 10:26:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:26:53.760 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:53.760 { 00:26:53.760 "name": "basen1", 00:26:53.760 "aliases": [ 00:26:53.760 "6f3de764-5050-4449-9618-467f35dad99b" 00:26:53.760 ], 00:26:53.760 "product_name": "NVMe disk", 00:26:53.760 "block_size": 4096, 00:26:53.760 "num_blocks": 1310720, 00:26:53.760 "uuid": "6f3de764-5050-4449-9618-467f35dad99b", 00:26:53.760 "numa_id": -1, 00:26:53.760 "assigned_rate_limits": { 00:26:53.760 "rw_ios_per_sec": 0, 00:26:53.760 "rw_mbytes_per_sec": 0, 00:26:53.760 "r_mbytes_per_sec": 0, 00:26:53.760 "w_mbytes_per_sec": 0 00:26:53.760 }, 00:26:53.760 "claimed": true, 00:26:53.760 "claim_type": "read_many_write_one", 00:26:53.760 "zoned": false, 00:26:53.760 "supported_io_types": { 00:26:53.760 "read": true, 00:26:53.760 "write": true, 00:26:53.760 "unmap": true, 00:26:53.760 "flush": true, 00:26:53.760 "reset": true, 00:26:53.760 "nvme_admin": true, 00:26:53.760 "nvme_io": true, 00:26:53.760 "nvme_io_md": false, 00:26:53.760 "write_zeroes": true, 00:26:53.760 "zcopy": false, 00:26:53.760 "get_zone_info": false, 00:26:53.760 "zone_management": false, 00:26:53.760 "zone_append": false, 00:26:53.760 "compare": true, 00:26:53.760 "compare_and_write": false, 00:26:53.761 "abort": true, 00:26:53.761 "seek_hole": false, 00:26:53.761 "seek_data": false, 00:26:53.761 "copy": true, 00:26:53.761 "nvme_iov_md": false 00:26:53.761 }, 00:26:53.761 "driver_specific": { 00:26:53.761 "nvme": [ 00:26:53.761 { 00:26:53.761 "pci_address": "0000:00:11.0", 00:26:53.761 "trid": { 00:26:53.761 "trtype": "PCIe", 00:26:53.761 "traddr": "0000:00:11.0" 00:26:53.761 }, 00:26:53.761 "ctrlr_data": { 00:26:53.761 "cntlid": 0, 00:26:53.761 "vendor_id": "0x1b36", 00:26:53.761 "model_number": "QEMU NVMe Ctrl", 00:26:53.761 "serial_number": "12341", 00:26:53.761 "firmware_revision": "8.0.0", 00:26:53.761 "subnqn": "nqn.2019-08.org.qemu:12341", 00:26:53.761 "oacs": { 00:26:53.761 "security": 0, 00:26:53.761 "format": 1, 00:26:53.761 "firmware": 0, 00:26:53.761 "ns_manage": 1 00:26:53.761 }, 00:26:53.761 "multi_ctrlr": false, 00:26:53.761 "ana_reporting": false 00:26:53.761 }, 00:26:53.761 "vs": { 00:26:53.761 "nvme_version": "1.4" 00:26:53.761 }, 00:26:53.761 "ns_data": { 00:26:53.761 "id": 1, 00:26:53.761 "can_share": false 00:26:53.761 } 00:26:53.761 } 00:26:53.761 ], 00:26:53.761 "mp_policy": "active_passive" 00:26:53.761 } 00:26:53.761 } 00:26:53.761 ]' 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:26:53.761 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:54.019 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=fb0e384f-1fc4-49c0-b085-29ef655510cb 00:26:54.019 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:26:54.019 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb0e384f-1fc4-49c0-b085-29ef655510cb 00:26:54.277 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:26:54.277 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=9365aa6c-2fa5-49dd-a2f3-d8cdaa99eb03 00:26:54.277 10:26:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 9365aa6c-2fa5-49dd-a2f3-d8cdaa99eb03 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=153813a5-5831-4263-9a56-8a2449472dc3 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 153813a5-5831-4263-9a56-8a2449472dc3 ]] 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 153813a5-5831-4263-9a56-8a2449472dc3 5120 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=153813a5-5831-4263-9a56-8a2449472dc3 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 153813a5-5831-4263-9a56-8a2449472dc3 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=153813a5-5831-4263-9a56-8a2449472dc3 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:26:54.536 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 153813a5-5831-4263-9a56-8a2449472dc3 00:26:54.794 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:26:54.794 { 00:26:54.794 "name": "153813a5-5831-4263-9a56-8a2449472dc3", 00:26:54.794 "aliases": [ 00:26:54.794 "lvs/basen1p0" 00:26:54.794 ], 00:26:54.794 "product_name": "Logical Volume", 00:26:54.794 "block_size": 4096, 00:26:54.794 "num_blocks": 5242880, 00:26:54.794 "uuid": "153813a5-5831-4263-9a56-8a2449472dc3", 00:26:54.794 "assigned_rate_limits": { 00:26:54.794 "rw_ios_per_sec": 0, 00:26:54.794 "rw_mbytes_per_sec": 0, 00:26:54.794 "r_mbytes_per_sec": 0, 00:26:54.794 "w_mbytes_per_sec": 0 00:26:54.794 }, 00:26:54.794 "claimed": false, 00:26:54.794 "zoned": false, 00:26:54.794 "supported_io_types": { 00:26:54.794 "read": true, 00:26:54.794 "write": true, 00:26:54.794 "unmap": true, 00:26:54.794 "flush": false, 00:26:54.794 "reset": true, 00:26:54.794 "nvme_admin": false, 00:26:54.794 "nvme_io": false, 00:26:54.794 "nvme_io_md": false, 00:26:54.794 "write_zeroes": 
true, 00:26:54.794 "zcopy": false, 00:26:54.795 "get_zone_info": false, 00:26:54.795 "zone_management": false, 00:26:54.795 "zone_append": false, 00:26:54.795 "compare": false, 00:26:54.795 "compare_and_write": false, 00:26:54.795 "abort": false, 00:26:54.795 "seek_hole": true, 00:26:54.795 "seek_data": true, 00:26:54.795 "copy": false, 00:26:54.795 "nvme_iov_md": false 00:26:54.795 }, 00:26:54.795 "driver_specific": { 00:26:54.795 "lvol": { 00:26:54.795 "lvol_store_uuid": "9365aa6c-2fa5-49dd-a2f3-d8cdaa99eb03", 00:26:54.795 "base_bdev": "basen1", 00:26:54.795 "thin_provision": true, 00:26:54.795 "num_allocated_clusters": 0, 00:26:54.795 "snapshot": false, 00:26:54.795 "clone": false, 00:26:54.795 "esnap_clone": false 00:26:54.795 } 00:26:54.795 } 00:26:54.795 } 00:26:54.795 ]' 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:26:54.795 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:26:55.053 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:26:55.053 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:26:55.053 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:26:55.312 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:26:55.312 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:26:55.312 10:27:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 153813a5-5831-4263-9a56-8a2449472dc3 -c cachen1p0 --l2p_dram_limit 2 00:26:55.571 [2024-11-04 10:27:01.180838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.181029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:26:55.571 [2024-11-04 10:27:01.181052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:55.571 [2024-11-04 10:27:01.181062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.181126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.181136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:26:55.571 [2024-11-04 10:27:01.181147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.042 ms 00:26:55.571 [2024-11-04 10:27:01.181155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.181177] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:26:55.571 [2024-11-04 
10:27:01.181880] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:26:55.571 [2024-11-04 10:27:01.181905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.181915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:26:55.571 [2024-11-04 10:27:01.181925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.730 ms 00:26:55.571 [2024-11-04 10:27:01.181932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.181966] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 7ca18fac-4210-400c-abce-904d8579156b 00:26:55.571 [2024-11-04 10:27:01.183056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.183088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:26:55.571 [2024-11-04 10:27:01.183098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:55.571 [2024-11-04 10:27:01.183108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.188428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.188581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:26:55.571 [2024-11-04 10:27:01.188596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.271 ms 00:26:55.571 [2024-11-04 10:27:01.188609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.188694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.188706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:26:55.571 [2024-11-04 10:27:01.188714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:55.571 [2024-11-04 10:27:01.188725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.188797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.188812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:26:55.571 [2024-11-04 10:27:01.188820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:26:55.571 [2024-11-04 10:27:01.188833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.188857] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:26:55.571 [2024-11-04 10:27:01.192425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.192455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:26:55.571 [2024-11-04 10:27:01.192467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.574 ms 00:26:55.571 [2024-11-04 10:27:01.192479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.192508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.571 [2024-11-04 10:27:01.192518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:26:55.571 [2024-11-04 10:27:01.192529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:55.571 [2024-11-04 10:27:01.192537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:26:55.571 [2024-11-04 10:27:01.192556] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:26:55.571 [2024-11-04 10:27:01.192696] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:26:55.571 [2024-11-04 10:27:01.192713] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:26:55.572 [2024-11-04 10:27:01.192725] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:26:55.572 [2024-11-04 10:27:01.192738] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:26:55.572 [2024-11-04 10:27:01.192749] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:26:55.572 [2024-11-04 10:27:01.192760] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:26:55.572 [2024-11-04 10:27:01.192768] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:26:55.572 [2024-11-04 10:27:01.192778] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:26:55.572 [2024-11-04 10:27:01.192803] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:26:55.572 [2024-11-04 10:27:01.192816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.572 [2024-11-04 10:27:01.192824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:26:55.572 [2024-11-04 10:27:01.192835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:26:55.572 [2024-11-04 10:27:01.192845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.572 [2024-11-04 10:27:01.192931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.572 [2024-11-04 10:27:01.192940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:26:55.572 [2024-11-04 10:27:01.192952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:26:55.572 [2024-11-04 10:27:01.192967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.572 [2024-11-04 10:27:01.193083] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:26:55.572 [2024-11-04 10:27:01.193096] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:26:55.572 [2024-11-04 10:27:01.193107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193128] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:26:55.572 [2024-11-04 10:27:01.193136] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:26:55.572 [2024-11-04 10:27:01.193156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:26:55.572 [2024-11-04 10:27:01.193166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:26:55.572 [2024-11-04 10:27:01.193174] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:26:55.572 [2024-11-04 10:27:01.193191] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:26:55.572 [2024-11-04 10:27:01.193200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:26:55.572 [2024-11-04 10:27:01.193217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:26:55.572 [2024-11-04 10:27:01.193225] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:26:55.572 [2024-11-04 10:27:01.193244] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:26:55.572 [2024-11-04 10:27:01.193254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:26:55.572 [2024-11-04 10:27:01.193270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193285] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:26:55.572 [2024-11-04 10:27:01.193291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:26:55.572 [2024-11-04 10:27:01.193315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:26:55.572 [2024-11-04 10:27:01.193335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193342] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:26:55.572 [2024-11-04 10:27:01.193358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:26:55.572 [2024-11-04 10:27:01.193379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:26:55.572 [2024-11-04 10:27:01.193401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193416] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:26:55.572 [2024-11-04 10:27:01.193422] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:26:55.572 [2024-11-04 10:27:01.193430] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193436] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:26:55.572 [2024-11-04 10:27:01.193445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:26:55.572 [2024-11-04 10:27:01.193452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:26:55.572 [2024-11-04 10:27:01.193468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:26:55.572 [2024-11-04 10:27:01.193479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:26:55.572 [2024-11-04 10:27:01.193486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:26:55.572 [2024-11-04 10:27:01.193495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:26:55.572 [2024-11-04 10:27:01.193501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:26:55.572 [2024-11-04 10:27:01.193510] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:26:55.572 [2024-11-04 10:27:01.193519] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:26:55.572 [2024-11-04 10:27:01.193530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:26:55.572 [2024-11-04 10:27:01.193548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:26:55.572 [2024-11-04 10:27:01.193570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:26:55.572 [2024-11-04 10:27:01.193579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:26:55.572 [2024-11-04 10:27:01.193586] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:26:55.572 [2024-11-04 10:27:01.193594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193644] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:26:55.572 [2024-11-04 10:27:01.193650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:26:55.572 [2024-11-04 10:27:01.193660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193671] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:55.572 [2024-11-04 10:27:01.193679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:26:55.572 [2024-11-04 10:27:01.193687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:26:55.572 [2024-11-04 10:27:01.193695] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:26:55.572 [2024-11-04 10:27:01.193702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:55.572 [2024-11-04 10:27:01.193710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:26:55.572 [2024-11-04 10:27:01.193718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.689 ms 00:26:55.572 [2024-11-04 10:27:01.193726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:55.572 [2024-11-04 10:27:01.193762] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
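Annotation: condensed, the bdev plumbing that produced this FTL instance is the RPC sequence below, with PCIe addresses and MiB sizes copied from the xtrace above. This is a sketch, not the test script itself; capturing the returned UUIDs into shell variables mirrors what ftl/common.sh does:

    RPC="$SPDK/scripts/rpc.py"   # assumes SPDK=/home/vagrant/spdk_repo/spdk
    # Base side: local NVMe at 00:11.0 -> basen1, then an lvstore and a thin 20 GiB lvol on it.
    $RPC bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    lvs_uuid=$($RPC bdev_lvol_create_lvstore basen1 lvs)
    base_uuid=$($RPC bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")
    # Cache side: local NVMe at 00:10.0 -> cachen1, split off a 5 GiB chunk as cachen1p0.
    $RPC bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
    $RPC bdev_split_create cachen1 -s 5120 1
    # FTL on top: base = the lvol, NV cache = cachen1p0, L2P capped at 2 MiB of DRAM
    # (hence the "l2p maximum resident size is: 1 (of 2) MiB" notice below).
    $RPC -t 60 bdev_ftl_create -b ftl -d "$base_uuid" -c cachen1p0 --l2p_dram_limit 2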
00:26:55.572 [2024-11-04 10:27:01.193788] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:26:59.778 [2024-11-04 10:27:04.642182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.778 [2024-11-04 10:27:04.642254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:26:59.778 [2024-11-04 10:27:04.642268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3448.406 ms 00:26:59.778 [2024-11-04 10:27:04.642279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.778 [2024-11-04 10:27:04.668623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.778 [2024-11-04 10:27:04.668681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:26:59.778 [2024-11-04 10:27:04.668694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.122 ms 00:26:59.778 [2024-11-04 10:27:04.668704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.778 [2024-11-04 10:27:04.668811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.778 [2024-11-04 10:27:04.668826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:26:59.778 [2024-11-04 10:27:04.668851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:26:59.778 [2024-11-04 10:27:04.668868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.778 [2024-11-04 10:27:04.699851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.699903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:26:59.779 [2024-11-04 10:27:04.699917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.913 ms 00:26:59.779 [2024-11-04 10:27:04.699929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.699975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.699985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:26:59.779 [2024-11-04 10:27:04.699993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:26:59.779 [2024-11-04 10:27:04.700005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.700381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.700413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:26:59.779 [2024-11-04 10:27:04.700423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.316 ms 00:26:59.779 [2024-11-04 10:27:04.700432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.700479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.700490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:26:59.779 [2024-11-04 10:27:04.700498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:26:59.779 [2024-11-04 10:27:04.700509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.715060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.715227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:26:59.779 [2024-11-04 10:27:04.715246] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.529 ms 00:26:59.779 [2024-11-04 10:27:04.715261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.726770] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:26:59.779 [2024-11-04 10:27:04.727657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.727686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:26:59.779 [2024-11-04 10:27:04.727701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.294 ms 00:26:59.779 [2024-11-04 10:27:04.727709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.774461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.774519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:26:59.779 [2024-11-04 10:27:04.774536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.706 ms 00:26:59.779 [2024-11-04 10:27:04.774545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.774643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.774654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:26:59.779 [2024-11-04 10:27:04.774667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.048 ms 00:26:59.779 [2024-11-04 10:27:04.774677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.799334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.799388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:26:59.779 [2024-11-04 10:27:04.799404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.582 ms 00:26:59.779 [2024-11-04 10:27:04.799413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.823765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.823825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:26:59.779 [2024-11-04 10:27:04.823840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.286 ms 00:26:59.779 [2024-11-04 10:27:04.823849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.824460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.824484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:26:59.779 [2024-11-04 10:27:04.824497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.560 ms 00:26:59.779 [2024-11-04 10:27:04.824504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.902830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.903064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:26:59.779 [2024-11-04 10:27:04.903092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 78.270 ms 00:26:59.779 [2024-11-04 10:27:04.903101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.929341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:26:59.779 [2024-11-04 10:27:04.929408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:26:59.779 [2024-11-04 10:27:04.929436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.901 ms 00:26:59.779 [2024-11-04 10:27:04.929445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.954879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.954939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:26:59.779 [2024-11-04 10:27:04.954955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.366 ms 00:26:59.779 [2024-11-04 10:27:04.954962] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.980416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.980474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:26:59.779 [2024-11-04 10:27:04.980488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.388 ms 00:26:59.779 [2024-11-04 10:27:04.980496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.980555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.980565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:26:59.779 [2024-11-04 10:27:04.980579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:26:59.779 [2024-11-04 10:27:04.980586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.980697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:26:59.779 [2024-11-04 10:27:04.980707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:26:59.779 [2024-11-04 10:27:04.980717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:26:59.779 [2024-11-04 10:27:04.980724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:26:59.779 [2024-11-04 10:27:04.981716] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3800.456 ms, result 0 00:26:59.779 { 00:26:59.779 "name": "ftl", 00:26:59.779 "uuid": "7ca18fac-4210-400c-abce-904d8579156b" 00:26:59.779 } 00:26:59.779 10:27:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:26:59.779 [2024-11-04 10:27:05.209033] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.779 10:27:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:26:59.779 10:27:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:00.037 [2024-11-04 10:27:05.649493] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:00.037 10:27:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:00.297 [2024-11-04 10:27:05.858081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:00.297 10:27:05 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:00.574 Fill FTL, iteration 1 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:00.574 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79604 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79604 /var/tmp/spdk.tgt.sock 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79604 ']' 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:00.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:00.575 10:27:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:00.575 [2024-11-04 10:27:06.310418] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
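Annotation: at this point the target has exported the FTL bdev over NVMe/TCP and a second SPDK app (the dd initiator, pinned to core 1) is starting up. The export itself is just the four RPCs visible above, collected here for reference:

    RPC="$SPDK/scripts/rpc.py"
    $RPC nvmf_create_transport --trtype TCP
    # -a: allow any host to connect, -m 1: at most one namespace.
    $RPC nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $RPC nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $RPC nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1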
00:27:00.575 [2024-11-04 10:27:06.310745] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79604 ] 00:27:00.833 [2024-11-04 10:27:06.470700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.833 [2024-11-04 10:27:06.573191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.767 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:01.767 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:01.767 10:27:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:01.767 ftln1 00:27:01.767 10:27:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:01.767 10:27:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79604 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79604 ']' 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79604 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79604 00:27:02.025 killing process with pid 79604 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79604' 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79604 00:27:02.025 10:27:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79604 00:27:03.922 10:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:03.922 10:27:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:03.922 [2024-11-04 10:27:09.359564] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
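Annotation: the tcp_dd helper invoked above amounts to two steps on the initiator's RPC socket — attach the exported namespace as a local bdev, then drive spdk_dd against it. Sketched with the exact flags from the log:

    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    # Attaching controller 'ftl' over TCP exposes the namespace as bdev ftln1.
    $RPC bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2018-09.io.spdk:cnode0
    # 1 GiB of urandom into the FTL bdev: 1 MiB blocks, queue depth 2, offset 0.
    "$SPDK/build/bin/spdk_dd" '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
        --json="$SPDK/test/ftl/config/ini.json" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0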
00:27:03.922 [2024-11-04 10:27:09.359696] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79646 ] 00:27:03.922 [2024-11-04 10:27:09.519691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:04.179 [2024-11-04 10:27:09.672311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.553  [2024-11-04T10:27:12.230Z] Copying: 217/1024 [MB] (217 MBps) [2024-11-04T10:27:13.164Z] Copying: 440/1024 [MB] (223 MBps) [2024-11-04T10:27:14.099Z] Copying: 665/1024 [MB] (225 MBps) [2024-11-04T10:27:14.664Z] Copying: 890/1024 [MB] (225 MBps) [2024-11-04T10:27:15.598Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:27:09.853 00:27:09.853 Calculate MD5 checksum, iteration 1 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:09.853 10:27:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:09.853 [2024-11-04 10:27:15.459750] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
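
Two quick checks on the numbers just reported (all figures come from the progress lines above; awk is used only as a calculator):

    $ echo $((1024 * 1048576))                       # count * bs
    1073741824                                       # = $size: the pass wrote exactly 1 GiB
    $ awk 'BEGIN { printf "%.1f\n", 1024 / 221 }'    # 1024 MiB at the 221 MBps average
    4.6                                              # seconds of pure copy time

That 4.6 s of copy time plus EAL/reactor startup accounts for the roughly six seconds of wall clock between the reactor notice (10:27:09) and the summary line (10:27:15).
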
00:27:09.853 [2024-11-04 10:27:15.459877] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79711 ] 00:27:10.111 [2024-11-04 10:27:15.617453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.111 [2024-11-04 10:27:15.717621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.483  [2024-11-04T10:27:17.793Z] Copying: 681/1024 [MB] (681 MBps) [2024-11-04T10:27:18.360Z] Copying: 1024/1024 [MB] (average 675 MBps) 00:27:12.615 00:27:12.615 10:27:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:12.615 10:27:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:15.155 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:15.156 Fill FTL, iteration 2 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=44ec8bb1df94157da7d0c0227b93c711 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:15.156 10:27:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:15.156 [2024-11-04 10:27:20.390973] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
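
The odd-looking cut -f1 '-d ' in the trace above is just cut -d ' ' -f1 with the space delimiter glued onto the flag as one quoted word; it keeps the digest field of md5sum's "digest  filename" output, which is what lands in sums[0]:

    $ printf '44ec8bb1df94157da7d0c0227b93c711  file\n' | cut -f1 -d ' '
    44ec8bb1df94157da7d0c0227b93c711
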
00:27:15.156 [2024-11-04 10:27:20.392333] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79765 ] 00:27:15.156 [2024-11-04 10:27:20.559948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:15.156 [2024-11-04 10:27:20.662265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:16.528  [2024-11-04T10:27:23.206Z] Copying: 216/1024 [MB] (216 MBps) [2024-11-04T10:27:24.139Z] Copying: 419/1024 [MB] (203 MBps) [2024-11-04T10:27:25.073Z] Copying: 636/1024 [MB] (217 MBps) [2024-11-04T10:27:26.007Z] Copying: 871/1024 [MB] (235 MBps) [2024-11-04T10:27:26.263Z] Copying: 1024/1024 [MB] (average 220 MBps) 00:27:20.519 00:27:20.776 Calculate MD5 checksum, iteration 2 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:20.776 10:27:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:20.776 [2024-11-04 10:27:26.356143] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
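
Iteration 2 repeats the same write path one gigabyte further in (seek=1024 blocks) and is digested with skip=1024. A hedged guess at how the two saved digests get used, since the comparison itself falls outside this excerpt (hypothetical sketch; names as in the loop sketched earlier):

    # After the prep-upgrade restart, re-read each 1 GiB region and require
    # that its digest still matches what was recorded before shutdown.
    for ((i = 0; i < iterations; i++)); do
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=$bs --count=$count --qd=$qd \
            --skip=$((i * count))
        [[ $(md5sum "$testdir/file" | cut -f1 -d ' ') == "${sums[i]}" ]]
    done
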
00:27:20.776 [2024-11-04 10:27:26.356325] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79829 ] 00:27:21.033 [2024-11-04 10:27:26.531292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:21.033 [2024-11-04 10:27:26.617360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.407  [2024-11-04T10:27:28.716Z] Copying: 665/1024 [MB] (665 MBps) [2024-11-04T10:27:29.656Z] Copying: 1024/1024 [MB] (average 660 MBps) 00:27:23.911 00:27:23.911 10:27:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:23.911 10:27:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=cfd7d384e492b15c951d5892e91387fd 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:26.438 [2024-11-04 10:27:31.897847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.438 [2024-11-04 10:27:31.898081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:26.438 [2024-11-04 10:27:31.898150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:26.438 [2024-11-04 10:27:31.898175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.438 [2024-11-04 10:27:31.898218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.438 [2024-11-04 10:27:31.898240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:26.438 [2024-11-04 10:27:31.898259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:26.438 [2024-11-04 10:27:31.898279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.438 [2024-11-04 10:27:31.898355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.438 [2024-11-04 10:27:31.898379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:26.438 [2024-11-04 10:27:31.898398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:26.438 [2024-11-04 10:27:31.898417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.438 [2024-11-04 10:27:31.898494] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.630 ms, result 0 00:27:26.438 true 00:27:26.438 10:27:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:26.438 { 00:27:26.438 "name": "ftl", 00:27:26.438 "properties": [ 00:27:26.438 { 00:27:26.438 "name": "superblock_version", 00:27:26.438 "value": 5, 00:27:26.438 "read-only": true 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "name": "base_device", 00:27:26.438 "bands": [ 00:27:26.438 { 00:27:26.438 "id": 0, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 
00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 1, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 2, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 3, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 4, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 5, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 6, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 7, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 8, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 9, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 10, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 11, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 12, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 13, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 14, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 15, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 16, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 17, 00:27:26.438 "state": "FREE", 00:27:26.438 "validity": 0.0 00:27:26.438 } 00:27:26.438 ], 00:27:26.438 "read-only": true 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "name": "cache_device", 00:27:26.438 "type": "bdev", 00:27:26.438 "chunks": [ 00:27:26.438 { 00:27:26.438 "id": 0, 00:27:26.438 "state": "INACTIVE", 00:27:26.438 "utilization": 0.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 1, 00:27:26.438 "state": "CLOSED", 00:27:26.438 "utilization": 1.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 2, 00:27:26.438 "state": "CLOSED", 00:27:26.438 "utilization": 1.0 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 3, 00:27:26.438 "state": "OPEN", 00:27:26.438 "utilization": 0.001953125 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "id": 4, 00:27:26.438 "state": "OPEN", 00:27:26.438 "utilization": 0.0 00:27:26.438 } 00:27:26.438 ], 00:27:26.438 "read-only": true 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "name": "verbose_mode", 00:27:26.438 "value": true, 00:27:26.438 "unit": "", 00:27:26.438 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:26.438 }, 00:27:26.438 { 00:27:26.438 "name": "prep_upgrade_on_shutdown", 00:27:26.438 "value": false, 00:27:26.438 "unit": "", 00:27:26.438 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:26.438 } 00:27:26.438 ] 00:27:26.438 } 00:27:26.438 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:26.696 [2024-11-04 10:27:32.274166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
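
This first property dump is the "before" picture: all 18 base-device bands are FREE, while the NV cache holds both fills as two CLOSED chunks at utilization 1.0 plus a sliver (0.001953125 = 1/512) of OPEN chunk 3. The used=3 that the script derives below comes from counting exactly those non-empty chunks; the jq filter is quoted verbatim from the trace:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    used=$($rpc bdev_ftl_get_properties -b ftl | jq \
        '[.properties[] | select(.name == "cache_device") | .chunks[]
          | select(.utilization != 0.0)] | length')
    echo "$used"    # -> 3 in this run: chunks 1 and 2 (CLOSED) plus chunk 3 (OPEN)
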
00:27:26.696 [2024-11-04 10:27:32.274332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:26.696 [2024-11-04 10:27:32.274380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:26.696 [2024-11-04 10:27:32.274398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.696 [2024-11-04 10:27:32.274431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.696 [2024-11-04 10:27:32.274449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:26.697 [2024-11-04 10:27:32.274464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:26.697 [2024-11-04 10:27:32.274478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.697 [2024-11-04 10:27:32.274502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:26.697 [2024-11-04 10:27:32.274518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:26.697 [2024-11-04 10:27:32.274533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:26.697 [2024-11-04 10:27:32.274577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:26.697 [2024-11-04 10:27:32.274640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.462 ms, result 0 00:27:26.697 true 00:27:26.697 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:26.697 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:26.697 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:26.955 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:26.955 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:26.955 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:27.213 [2024-11-04 10:27:32.730504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.213 [2024-11-04 10:27:32.730551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:27.213 [2024-11-04 10:27:32.730561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:27.213 [2024-11-04 10:27:32.730567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.213 [2024-11-04 10:27:32.730585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.213 [2024-11-04 10:27:32.730592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:27.213 [2024-11-04 10:27:32.730598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.213 [2024-11-04 10:27:32.730604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:27.213 [2024-11-04 10:27:32.730619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:27.213 [2024-11-04 10:27:32.730625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:27.213 [2024-11-04 10:27:32.730631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:27.213 [2024-11-04 10:27:32.730636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:27.213 [2024-11-04 10:27:32.730682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.171 ms, result 0 00:27:27.213 true 00:27:27.213 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:27.213 { 00:27:27.213 "name": "ftl", 00:27:27.213 "properties": [ 00:27:27.213 { 00:27:27.213 "name": "superblock_version", 00:27:27.213 "value": 5, 00:27:27.213 "read-only": true 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "name": "base_device", 00:27:27.213 "bands": [ 00:27:27.213 { 00:27:27.213 "id": 0, 00:27:27.213 "state": "FREE", 00:27:27.213 "validity": 0.0 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "id": 1, 00:27:27.213 "state": "FREE", 00:27:27.213 "validity": 0.0 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "id": 2, 00:27:27.213 "state": "FREE", 00:27:27.213 "validity": 0.0 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "id": 3, 00:27:27.213 "state": "FREE", 00:27:27.213 "validity": 0.0 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "id": 4, 00:27:27.213 "state": "FREE", 00:27:27.213 "validity": 0.0 00:27:27.213 }, 00:27:27.213 { 00:27:27.213 "id": 5, 00:27:27.213 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 6, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 7, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 8, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 9, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 10, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 11, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 12, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 13, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 14, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 15, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 16, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 17, 00:27:27.214 "state": "FREE", 00:27:27.214 "validity": 0.0 00:27:27.214 } 00:27:27.214 ], 00:27:27.214 "read-only": true 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "name": "cache_device", 00:27:27.214 "type": "bdev", 00:27:27.214 "chunks": [ 00:27:27.214 { 00:27:27.214 "id": 0, 00:27:27.214 "state": "INACTIVE", 00:27:27.214 "utilization": 0.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 1, 00:27:27.214 "state": "CLOSED", 00:27:27.214 "utilization": 1.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 2, 00:27:27.214 "state": "CLOSED", 00:27:27.214 "utilization": 1.0 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 3, 00:27:27.214 "state": "OPEN", 00:27:27.214 "utilization": 0.001953125 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "id": 4, 00:27:27.214 "state": "OPEN", 00:27:27.214 "utilization": 0.0 00:27:27.214 } 00:27:27.214 ], 00:27:27.214 "read-only": true 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "name": "verbose_mode", 
00:27:27.214 "value": true, 00:27:27.214 "unit": "", 00:27:27.214 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:27.214 }, 00:27:27.214 { 00:27:27.214 "name": "prep_upgrade_on_shutdown", 00:27:27.214 "value": true, 00:27:27.214 "unit": "", 00:27:27.214 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:27.214 } 00:27:27.214 ] 00:27:27.214 } 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79476 ]] 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79476 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79476 ']' 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79476 00:27:27.214 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79476 00:27:27.472 killing process with pid 79476 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79476' 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79476 00:27:27.472 10:27:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79476 00:27:28.038 [2024-11-04 10:27:33.525765] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:28.038 [2024-11-04 10:27:33.537125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.038 [2024-11-04 10:27:33.537175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:28.038 [2024-11-04 10:27:33.537185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:28.038 [2024-11-04 10:27:33.537192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:28.038 [2024-11-04 10:27:33.537211] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:28.038 [2024-11-04 10:27:33.539285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:28.038 [2024-11-04 10:27:33.539312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:28.038 [2024-11-04 10:27:33.539321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.062 ms 00:27:28.038 [2024-11-04 10:27:33.539328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.730972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.731029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:36.144 [2024-11-04 10:27:41.731045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8191.589 ms 00:27:36.144 [2024-11-04 10:27:41.731054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.732275] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.732305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:36.144 [2024-11-04 10:27:41.732314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.203 ms 00:27:36.144 [2024-11-04 10:27:41.732322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.733452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.733580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:36.144 [2024-11-04 10:27:41.733596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.104 ms 00:27:36.144 [2024-11-04 10:27:41.733604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.743144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.743187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:36.144 [2024-11-04 10:27:41.743199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.486 ms 00:27:36.144 [2024-11-04 10:27:41.743207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.749815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.749860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:36.144 [2024-11-04 10:27:41.749872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.567 ms 00:27:36.144 [2024-11-04 10:27:41.749880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.749970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.749981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:36.144 [2024-11-04 10:27:41.749990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:36.144 [2024-11-04 10:27:41.749997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.759733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.759795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:36.144 [2024-11-04 10:27:41.759808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.710 ms 00:27:36.144 [2024-11-04 10:27:41.759815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.769249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.769303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:36.144 [2024-11-04 10:27:41.769315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.395 ms 00:27:36.144 [2024-11-04 10:27:41.769323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.778251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.778314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:36.144 [2024-11-04 10:27:41.778326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.883 ms 00:27:36.144 [2024-11-04 10:27:41.778333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.787565] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.787619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:36.144 [2024-11-04 10:27:41.787629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.136 ms 00:27:36.144 [2024-11-04 10:27:41.787637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.144 [2024-11-04 10:27:41.787672] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:36.144 [2024-11-04 10:27:41.787688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.144 [2024-11-04 10:27:41.787699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:36.144 [2024-11-04 10:27:41.787720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:36.144 [2024-11-04 10:27:41.787728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:36.144 [2024-11-04 10:27:41.787875] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:36.144 [2024-11-04 10:27:41.787883] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7ca18fac-4210-400c-abce-904d8579156b 00:27:36.144 [2024-11-04 10:27:41.787891] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:36.144 [2024-11-04 10:27:41.787898] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:36.144 [2024-11-04 10:27:41.787906] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:36.144 [2024-11-04 10:27:41.787914] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:36.144 [2024-11-04 10:27:41.787921] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:36.144 [2024-11-04 10:27:41.787929] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:36.144 [2024-11-04 10:27:41.787936] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:36.144 [2024-11-04 10:27:41.787943] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:36.144 [2024-11-04 10:27:41.787949] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:36.144 [2024-11-04 10:27:41.787956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.144 [2024-11-04 10:27:41.787967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:36.144 [2024-11-04 10:27:41.787978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.285 ms 00:27:36.145 [2024-11-04 10:27:41.787985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.800442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.145 [2024-11-04 10:27:41.800494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:36.145 [2024-11-04 10:27:41.800506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.438 ms 00:27:36.145 [2024-11-04 10:27:41.800514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.800917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.145 [2024-11-04 10:27:41.800928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:36.145 [2024-11-04 10:27:41.800936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.375 ms 00:27:36.145 [2024-11-04 10:27:41.800943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.842319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.145 [2024-11-04 10:27:41.842375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:36.145 [2024-11-04 10:27:41.842388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.145 [2024-11-04 10:27:41.842396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.842442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.145 [2024-11-04 10:27:41.842450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:36.145 [2024-11-04 10:27:41.842458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.145 [2024-11-04 10:27:41.842465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.842558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.145 [2024-11-04 10:27:41.842568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:36.145 [2024-11-04 10:27:41.842575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.145 [2024-11-04 10:27:41.842583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.145 [2024-11-04 10:27:41.842599] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.145 [2024-11-04 10:27:41.842610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:36.145 [2024-11-04 10:27:41.842617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.145 [2024-11-04 10:27:41.842624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.919720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.919776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:36.402 [2024-11-04 10:27:41.919817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.402 [2024-11-04 10:27:41.919825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.982476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.982705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:36.402 [2024-11-04 10:27:41.982722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.402 [2024-11-04 10:27:41.982730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.982830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.982841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:36.402 [2024-11-04 10:27:41.982849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.402 [2024-11-04 10:27:41.982857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.982921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.982932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:36.402 [2024-11-04 10:27:41.982942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.402 [2024-11-04 10:27:41.982949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.983041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.983050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:36.402 [2024-11-04 10:27:41.983058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.402 [2024-11-04 10:27:41.983065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.402 [2024-11-04 10:27:41.983102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.402 [2024-11-04 10:27:41.983111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:36.402 [2024-11-04 10:27:41.983119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.403 [2024-11-04 10:27:41.983128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.403 [2024-11-04 10:27:41.983163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.403 [2024-11-04 10:27:41.983172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:36.403 [2024-11-04 10:27:41.983179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.403 [2024-11-04 10:27:41.983187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.403 
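
The Rollback records around this point are FTL tearing its runtime structures back down (reloc, bands metadata, trim and valid maps, NV cache, IO channels, superblock, bdevs) now that everything is persisted. The statistics dumped a little earlier are internally consistent, assuming FTL's 4 KiB block size (the post-restart layout dump bears that out: 0x480000 = 4718592 data blocks covering 18432 MiB is 4096 bytes per block):

    $ echo $((524288 * 4096 / 1024**2))      # user writes, blocks -> MiB
    2048                                     # exactly the two 1 GiB fill passes
    $ echo $((261120 * 2 + 2048))            # valid blocks in bands 1, 2 and 3
    524288                                   # matches "total valid LBAs"
    $ awk 'BEGIN { printf "%.4f\n", 786752 / 524288 }'
    1.5006                                   # total/user writes = the logged WAF

Note also that the 'Stop core poller' step above (8191.589 ms) accounts for nearly all of the 8446.191 ms 'FTL shutdown' total reported just below; the persist steps together take only about a quarter of a second.
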
[2024-11-04 10:27:41.983226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:36.403 [2024-11-04 10:27:41.983241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:36.403 [2024-11-04 10:27:41.983251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:36.403 [2024-11-04 10:27:41.983259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.403 [2024-11-04 10:27:41.983368] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8446.191 ms, result 0 00:27:40.623 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:40.623 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:40.623 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:40.623 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:40.623 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:40.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80026 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80026 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80026 ']' 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:40.624 10:27:46 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:40.624 [2024-11-04 10:27:46.364982] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
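
With the prep-upgrade shutdown complete, the test brings the target back up from the configuration saved back at common.sh@126, this time letting spdk_tgt load everything from JSON. Per the trace (common.sh@84 checks the file, @85 starts the target on core 0, then waitforlisten blocks on the default RPC socket):

    cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
    [[ -f $cfg ]]
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config="$cfg" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"    # default socket /var/tmp/spdk.sock

The two "Currently unable to find bdev with name: cachen1" notices that follow appear to be benign: the JSON loader retries opening the cache bdev until the bdevs it depends on have been created, after which startup proceeds into the FTL restore sequence.
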
00:27:40.624 [2024-11-04 10:27:46.365088] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80026 ] 00:27:40.881 [2024-11-04 10:27:46.517252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:40.881 [2024-11-04 10:27:46.623148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.817 [2024-11-04 10:27:47.320821] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:41.817 [2024-11-04 10:27:47.320885] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:41.817 [2024-11-04 10:27:47.465431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.465508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:41.817 [2024-11-04 10:27:47.465530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:41.817 [2024-11-04 10:27:47.465544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.465625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.465641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:41.817 [2024-11-04 10:27:47.465656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:27:41.817 [2024-11-04 10:27:47.465668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.465707] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:41.817 [2024-11-04 10:27:47.466916] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:41.817 [2024-11-04 10:27:47.466959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.466974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:41.817 [2024-11-04 10:27:47.466988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.263 ms 00:27:41.817 [2024-11-04 10:27:47.467001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.468505] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:41.817 [2024-11-04 10:27:47.488666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.488752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:41.817 [2024-11-04 10:27:47.488773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.159 ms 00:27:41.817 [2024-11-04 10:27:47.488824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.488937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.488954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:41.817 [2024-11-04 10:27:47.488968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:41.817 [2024-11-04 10:27:47.488980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.494828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 
10:27:47.495106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:41.817 [2024-11-04 10:27:47.495138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.703 ms 00:27:41.817 [2024-11-04 10:27:47.495153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.495262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.495279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:41.817 [2024-11-04 10:27:47.495294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.059 ms 00:27:41.817 [2024-11-04 10:27:47.495306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.495385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.495403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:41.817 [2024-11-04 10:27:47.495417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:27:41.817 [2024-11-04 10:27:47.495434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.495472] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:41.817 [2024-11-04 10:27:47.500684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.500741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:41.817 [2024-11-04 10:27:47.500758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.220 ms 00:27:41.817 [2024-11-04 10:27:47.500771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.500852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.500868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:41.817 [2024-11-04 10:27:47.500882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:41.817 [2024-11-04 10:27:47.500895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.500987] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:41.817 [2024-11-04 10:27:47.501018] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:41.817 [2024-11-04 10:27:47.501071] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:41.817 [2024-11-04 10:27:47.501096] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:41.817 [2024-11-04 10:27:47.501238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:41.817 [2024-11-04 10:27:47.501263] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:41.817 [2024-11-04 10:27:47.501280] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:41.817 [2024-11-04 10:27:47.501296] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:41.817 [2024-11-04 10:27:47.501312] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:41.817 [2024-11-04 10:27:47.501325] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:41.817 [2024-11-04 10:27:47.501340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:41.817 [2024-11-04 10:27:47.501353] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:41.817 [2024-11-04 10:27:47.501365] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:41.817 [2024-11-04 10:27:47.501379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.501392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:41.817 [2024-11-04 10:27:47.501407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.394 ms 00:27:41.817 [2024-11-04 10:27:47.501420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.501548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.817 [2024-11-04 10:27:47.501562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:41.817 [2024-11-04 10:27:47.501574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.095 ms 00:27:41.817 [2024-11-04 10:27:47.501589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.817 [2024-11-04 10:27:47.501728] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:41.817 [2024-11-04 10:27:47.501746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:41.817 [2024-11-04 10:27:47.501761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:41.817 [2024-11-04 10:27:47.501775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:41.817 [2024-11-04 10:27:47.501818] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:41.817 [2024-11-04 10:27:47.501842] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:41.817 [2024-11-04 10:27:47.501855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:41.817 [2024-11-04 10:27:47.501867] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:41.817 [2024-11-04 10:27:47.501891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:41.817 [2024-11-04 10:27:47.501903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501915] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:41.817 [2024-11-04 10:27:47.501927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:41.817 [2024-11-04 10:27:47.501937] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:41.817 [2024-11-04 10:27:47.501959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:41.817 [2024-11-04 10:27:47.501971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.817 [2024-11-04 10:27:47.501983] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:41.817 [2024-11-04 10:27:47.501995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:41.817 [2024-11-04 10:27:47.502006] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.817 [2024-11-04 10:27:47.502017] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:41.817 [2024-11-04 10:27:47.502028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:41.817 [2024-11-04 10:27:47.502039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.817 [2024-11-04 10:27:47.502060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:41.817 [2024-11-04 10:27:47.502075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:41.817 [2024-11-04 10:27:47.502086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.817 [2024-11-04 10:27:47.502098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:41.817 [2024-11-04 10:27:47.502109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:41.817 [2024-11-04 10:27:47.502121] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:41.817 [2024-11-04 10:27:47.502132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:41.818 [2024-11-04 10:27:47.502143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:41.818 [2024-11-04 10:27:47.502154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:41.818 [2024-11-04 10:27:47.502177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:41.818 [2024-11-04 10:27:47.502188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502200] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:41.818 [2024-11-04 10:27:47.502211] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:41.818 [2024-11-04 10:27:47.502245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:41.818 [2024-11-04 10:27:47.502255] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502266] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:41.818 [2024-11-04 10:27:47.502278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:41.818 [2024-11-04 10:27:47.502291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:41.818 [2024-11-04 10:27:47.502302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:41.818 [2024-11-04 10:27:47.502315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:41.818 [2024-11-04 10:27:47.502327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:41.818 [2024-11-04 10:27:47.502337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:41.818 [2024-11-04 10:27:47.502348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:41.818 [2024-11-04 10:27:47.502359] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:41.818 [2024-11-04 10:27:47.502371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:41.818 [2024-11-04 10:27:47.502384] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:41.818 [2024-11-04 10:27:47.502403] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:41.818 [2024-11-04 10:27:47.502431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:41.818 [2024-11-04 10:27:47.502470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:41.818 [2024-11-04 10:27:47.502485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:41.818 [2024-11-04 10:27:47.502498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:41.818 [2024-11-04 10:27:47.502511] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502525] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:41.818 [2024-11-04 10:27:47.502600] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:41.818 [2024-11-04 10:27:47.502614] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502629] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:41.818 [2024-11-04 10:27:47.502642] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:41.818 [2024-11-04 10:27:47.502655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:41.818 [2024-11-04 10:27:47.502667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:41.818 [2024-11-04 10:27:47.502681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:41.818 [2024-11-04 10:27:47.502694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:41.818 [2024-11-04 10:27:47.502708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.041 ms 00:27:41.818 [2024-11-04 10:27:47.502720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:41.818 [2024-11-04 10:27:47.503121] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:41.818 [2024-11-04 10:27:47.503199] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:44.380 [2024-11-04 10:27:49.777763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.380 [2024-11-04 10:27:49.778005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:44.380 [2024-11-04 10:27:49.778105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2274.634 ms 00:27:44.380 [2024-11-04 10:27:49.778131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.380 [2024-11-04 10:27:49.806816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.380 [2024-11-04 10:27:49.807014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:44.380 [2024-11-04 10:27:49.807073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.446 ms 00:27:44.380 [2024-11-04 10:27:49.807097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.380 [2024-11-04 10:27:49.807223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.380 [2024-11-04 10:27:49.807255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:44.380 [2024-11-04 10:27:49.807282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:44.380 [2024-11-04 10:27:49.807302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.380 [2024-11-04 10:27:49.839991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.380 [2024-11-04 10:27:49.840208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:44.380 [2024-11-04 10:27:49.840264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.625 ms 00:27:44.380 [2024-11-04 10:27:49.840286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.380 [2024-11-04 10:27:49.840365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.380 [2024-11-04 10:27:49.840388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:44.380 [2024-11-04 10:27:49.840417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:44.380 [2024-11-04 10:27:49.840437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.840928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.840974] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:44.381 [2024-11-04 10:27:49.840995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:27:44.381 [2024-11-04 10:27:49.841014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.841140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.841167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:44.381 [2024-11-04 10:27:49.841187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:44.381 [2024-11-04 10:27:49.841206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.857183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.857319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:44.381 [2024-11-04 10:27:49.857335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.907 ms 00:27:44.381 [2024-11-04 10:27:49.857344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.870680] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:44.381 [2024-11-04 10:27:49.870716] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:44.381 [2024-11-04 10:27:49.870729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.870738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:44.381 [2024-11-04 10:27:49.870747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.231 ms 00:27:44.381 [2024-11-04 10:27:49.870755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.884271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.884394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:44.381 [2024-11-04 10:27:49.884422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.447 ms 00:27:44.381 [2024-11-04 10:27:49.884432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.895522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.895628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:44.381 [2024-11-04 10:27:49.895642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.054 ms 00:27:44.381 [2024-11-04 10:27:49.895650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.906619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.906725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:44.381 [2024-11-04 10:27:49.906739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.939 ms 00:27:44.381 [2024-11-04 10:27:49.906746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.907375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.907404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:44.381 [2024-11-04 
10:27:49.907414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:27:44.381 [2024-11-04 10:27:49.907422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.980458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.980690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:44.381 [2024-11-04 10:27:49.980712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.012 ms 00:27:44.381 [2024-11-04 10:27:49.980721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.992194] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:44.381 [2024-11-04 10:27:49.993161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.993190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:44.381 [2024-11-04 10:27:49.993203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.358 ms 00:27:44.381 [2024-11-04 10:27:49.993211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.993322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.993334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:44.381 [2024-11-04 10:27:49.993346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:44.381 [2024-11-04 10:27:49.993354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.993414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.993425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:44.381 [2024-11-04 10:27:49.993434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:44.381 [2024-11-04 10:27:49.993441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.993464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.993473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:44.381 [2024-11-04 10:27:49.993481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:44.381 [2024-11-04 10:27:49.993491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:49.993527] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:44.381 [2024-11-04 10:27:49.993539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:49.993547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:44.381 [2024-11-04 10:27:49.993555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:44.381 [2024-11-04 10:27:49.993563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:50.016676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:50.016714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:44.381 [2024-11-04 10:27:50.016731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.093 ms 00:27:44.381 [2024-11-04 10:27:50.016740] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:50.016832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:44.381 [2024-11-04 10:27:50.016844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:44.381 [2024-11-04 10:27:50.016853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:44.381 [2024-11-04 10:27:50.016862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:44.381 [2024-11-04 10:27:50.018251] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2552.346 ms, result 0 00:27:44.381 [2024-11-04 10:27:50.033061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.381 [2024-11-04 10:27:50.049055] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:44.381 [2024-11-04 10:27:50.057214] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:44.944 10:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:44.944 10:27:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:44.944 10:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:44.944 10:27:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:44.944 10:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:45.201 [2024-11-04 10:27:50.829947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.201 [2024-11-04 10:27:50.830014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:45.201 [2024-11-04 10:27:50.830029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:45.201 [2024-11-04 10:27:50.830037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.201 [2024-11-04 10:27:50.830064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.201 [2024-11-04 10:27:50.830074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:45.202 [2024-11-04 10:27:50.830083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:45.202 [2024-11-04 10:27:50.830090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.202 [2024-11-04 10:27:50.830111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:45.202 [2024-11-04 10:27:50.830119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:45.202 [2024-11-04 10:27:50.830127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:45.202 [2024-11-04 10:27:50.830135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:45.202 [2024-11-04 10:27:50.830199] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.248 ms, result 0 00:27:45.202 true 00:27:45.202 10:27:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.459 { 00:27:45.459 "name": "ftl", 00:27:45.459 "properties": [ 00:27:45.459 { 00:27:45.459 "name": "superblock_version", 00:27:45.459 "value": 5, 00:27:45.459 "read-only": true 00:27:45.459 }, 
{
  "name": "base_device",
  "bands": [
    { "id": 0, "state": "CLOSED", "validity": 1.0 },
    { "id": 1, "state": "CLOSED", "validity": 1.0 },
    { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
    { "id": 3, "state": "FREE", "validity": 0.0 },
    { "id": 4, "state": "FREE", "validity": 0.0 },
    { "id": 5, "state": "FREE", "validity": 0.0 },
    { "id": 6, "state": "FREE", "validity": 0.0 },
    { "id": 7, "state": "FREE", "validity": 0.0 },
    { "id": 8, "state": "FREE", "validity": 0.0 },
    { "id": 9, "state": "FREE", "validity": 0.0 },
    { "id": 10, "state": "FREE", "validity": 0.0 },
    { "id": 11, "state": "FREE", "validity": 0.0 },
    { "id": 12, "state": "FREE", "validity": 0.0 },
    { "id": 13, "state": "FREE", "validity": 0.0 },
    { "id": 14, "state": "FREE", "validity": 0.0 },
    { "id": 15, "state": "FREE", "validity": 0.0 },
    { "id": 16, "state": "FREE", "validity": 0.0 },
    { "id": 17, "state": "FREE", "validity": 0.0 }
  ],
  "read-only": true
},
{
  "name": "cache_device",
  "type": "bdev",
  "chunks": [
    { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
    { "id": 1, "state": "OPEN", "utilization": 0.0 },
    { "id": 2, "state": "OPEN", "utilization": 0.0 },
    { "id": 3, "state": "FREE", "utilization": 0.0 },
    { "id": 4, "state": "FREE", "utilization": 0.0 }
  ],
  "read-only": true
},
{
  "name": "verbose_mode",
  "value": true,
  "unit": "",
  "desc": "In verbose mode, user is able to get access to additional advanced FTL properties"
},
{
  "name": "prep_upgrade_on_shutdown",
  "value": false,
  "unit": "",
  "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version"
}
]
}
10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:45.460 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:45.460 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.717 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:45.717 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:45.717 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:45.717 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:45.717 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:45.975 Validate MD5 checksum, iteration 1 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:45.975 10:27:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:45.975 [2024-11-04 10:27:51.543465] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
00:27:45.975 [2024-11-04 10:27:51.543735] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80095 ] 00:27:45.975 [2024-11-04 10:27:51.703968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.233 [2024-11-04 10:27:51.804736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:47.608  [2024-11-04T10:27:54.305Z] Copying: 617/1024 [MB] (617 MBps) [2024-11-04T10:27:55.240Z] Copying: 1024/1024 [MB] (average 612 MBps) 00:27:49.495 00:27:49.495 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:49.495 10:27:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=44ec8bb1df94157da7d0c0227b93c711 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 44ec8bb1df94157da7d0c0227b93c711 != \4\4\e\c\8\b\b\1\d\f\9\4\1\5\7\d\a\7\d\0\c\0\2\2\7\b\9\3\c\7\1\1 ]] 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:51.393 Validate MD5 checksum, iteration 2 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:51.393 10:27:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:51.393 [2024-11-04 10:27:56.860723] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
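
Iteration 1 above read the first 1024 MiB of ftln1 through spdk_dd (via the tcp_dd wrapper), hashed the output file with md5sum, and compared the digest against 44ec8bb1df94157da7d0c0227b93c711; iteration 2, whose banner has just printed, repeats the read with --skip advanced by 1024. The shape of the loop, sketched under the assumption that tcp_dd behaves as in the trace, with testfile and ref_sums as placeholders (the real harness derives the expected digests from data it wrote earlier):

skip=0
for i in 0 1; do
    echo "Validate MD5 checksum, iteration $((i + 1))"
    # 1024 x 1 MiB reads from the exported FTL namespace, queue depth 2.
    tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
    skip=$((skip + 1024))
    sum=$(md5sum "$testfile" | cut -f1 -d' ')
    [[ $sum == "${ref_sums[$i]}" ]] || exit 1
done
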
00:27:51.393 [2024-11-04 10:27:56.860872] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80163 ] 00:27:51.393 [2024-11-04 10:27:57.032864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.651 [2024-11-04 10:27:57.172665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.026  [2024-11-04T10:27:59.337Z] Copying: 658/1024 [MB] (658 MBps) [2024-11-04T10:28:00.273Z] Copying: 1024/1024 [MB] (average 642 MBps) 00:27:54.528 00:27:54.528 10:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:27:54.528 10:28:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:57.057 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:57.057 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cfd7d384e492b15c951d5892e91387fd 00:27:57.057 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cfd7d384e492b15c951d5892e91387fd != \c\f\d\7\d\3\8\4\e\4\9\2\b\1\5\c\9\5\1\d\5\8\9\2\e\9\1\3\8\7\f\d ]] 00:27:57.057 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80026 ]] 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80026 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80224 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80224 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80224 ']' 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:57.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
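
This is the dirty-shutdown step itself: tcp_target_shutdown_dirty SIGKILLs the running target (pid 80026), denying FTL any chance to flush state or mark its superblock clean, and tcp_target_setup relaunches spdk_tgt from the same tgt.json and waits for the RPC socket. Roughly, based on the ftl/common.sh calls visible in the trace; the polling loop stands in for the harness's waitforlisten helper and is only a sketch:

kill -9 "$spdk_tgt_pid"    # SIGKILL, so there is no clean FTL shutdown path
unset spdk_tgt_pid

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
    --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
spdk_tgt_pid=$!

# Block until the new target answers on /var/tmp/spdk.sock.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 rpc_get_methods &> /dev/null; do
    sleep 0.5
done
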
00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:57.058 10:28:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:57.058 [2024-11-04 10:28:02.490016] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:27:57.058 [2024-11-04 10:28:02.490498] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80224 ] 00:27:57.058 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 80026 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:27:57.058 [2024-11-04 10:28:02.651095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.058 [2024-11-04 10:28:02.750604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.991 [2024-11-04 10:28:03.448911] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.991 [2024-11-04 10:28:03.449136] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:57.991 [2024-11-04 10:28:03.598194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.991 [2024-11-04 10:28:03.598408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:57.991 [2024-11-04 10:28:03.598480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.991 [2024-11-04 10:28:03.598505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.991 [2024-11-04 10:28:03.598608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.991 [2024-11-04 10:28:03.598687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:57.991 [2024-11-04 10:28:03.598737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:27:57.991 [2024-11-04 10:28:03.598759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.991 [2024-11-04 10:28:03.598813] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:57.991 [2024-11-04 10:28:03.599648] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:57.991 [2024-11-04 10:28:03.599741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.991 [2024-11-04 10:28:03.599752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:57.991 [2024-11-04 10:28:03.599760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.938 ms 00:27:57.992 [2024-11-04 10:28:03.599769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.600080] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:57.992 [2024-11-04 10:28:03.615356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.615395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:57.992 [2024-11-04 10:28:03.615408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.276 ms 
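
The bash job-control line from autotest_common.sh confirms pid 80026 died to the SIGKILL, and the new target (pid 80224) then rebuilds the FTL stack from tgt.json: the repeated "unable to find bdev with name: cachen1" notices are FTL retrying its open until the NVMe bdevs have registered, and "SHM: clean 0, shm_clean 0" during the superblock load records that the previous instance never shut down cleanly, which is why this startup runs the recovery chain below (Recover band state, P2L checkpoints, open chunks) instead of the scrub-and-initialize path of the first boot. The instance itself is declared in tgt.json rather than created over live RPC; a hand-built equivalent would look roughly like the following, with the flag names assumed from current rpc.py conventions and the base bdev as a placeholder:

# Hypothetical live-RPC equivalent of the FTL section of tgt.json (illustrative only).
rpc.py bdev_ftl_create -b ftl -d <base_bdev> -c cachen1p0
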
00:27:57.992 [2024-11-04 10:28:03.615417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.624714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.624865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:57.992 [2024-11-04 10:28:03.624885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:57.992 [2024-11-04 10:28:03.624893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.625210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.625227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:57.992 [2024-11-04 10:28:03.625236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.236 ms 00:27:57.992 [2024-11-04 10:28:03.625244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.625292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.625303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:57.992 [2024-11-04 10:28:03.625311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.032 ms 00:27:57.992 [2024-11-04 10:28:03.625318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.625344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.625353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:57.992 [2024-11-04 10:28:03.625360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:57.992 [2024-11-04 10:28:03.625367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.625388] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:57.992 [2024-11-04 10:28:03.628403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.628522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:57.992 [2024-11-04 10:28:03.628536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.020 ms 00:27:57.992 [2024-11-04 10:28:03.628545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.628574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.628587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:57.992 [2024-11-04 10:28:03.628595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.992 [2024-11-04 10:28:03.628601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.628622] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:57.992 [2024-11-04 10:28:03.628639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:57.992 [2024-11-04 10:28:03.628673] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:57.992 [2024-11-04 10:28:03.628687] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:57.992 [2024-11-04 
10:28:03.628813] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:57.992 [2024-11-04 10:28:03.628825] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:57.992 [2024-11-04 10:28:03.628835] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:57.992 [2024-11-04 10:28:03.628845] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:57.992 [2024-11-04 10:28:03.628854] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:57.992 [2024-11-04 10:28:03.628862] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:57.992 [2024-11-04 10:28:03.628869] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:57.992 [2024-11-04 10:28:03.628876] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:57.992 [2024-11-04 10:28:03.628883] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:57.992 [2024-11-04 10:28:03.628890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.628897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:57.992 [2024-11-04 10:28:03.628907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.270 ms 00:27:57.992 [2024-11-04 10:28:03.628914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.628998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.992 [2024-11-04 10:28:03.629006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:57.992 [2024-11-04 10:28:03.629013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.068 ms 00:27:57.992 [2024-11-04 10:28:03.629020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.992 [2024-11-04 10:28:03.629132] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:57.992 [2024-11-04 10:28:03.629142] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:57.992 [2024-11-04 10:28:03.629150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.992 [2024-11-04 10:28:03.629160] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.992 [2024-11-04 10:28:03.629167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:57.992 [2024-11-04 10:28:03.629174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:57.992 [2024-11-04 10:28:03.629181] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:57.992 [2024-11-04 10:28:03.629188] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:57.992 [2024-11-04 10:28:03.629195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:57.992 [2024-11-04 10:28:03.629201] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.992 [2024-11-04 10:28:03.629208] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:57.992 [2024-11-04 10:28:03.629214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:57.992 [2024-11-04 10:28:03.629221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.992 [2024-11-04 
10:28:03.629228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:57.992 [2024-11-04 10:28:03.629234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:57.993 [2024-11-04 10:28:03.629240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:57.993 [2024-11-04 10:28:03.629252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:57.993 [2024-11-04 10:28:03.629258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:57.993 [2024-11-04 10:28:03.629271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:57.993 [2024-11-04 10:28:03.629295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629310] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:57.993 [2024-11-04 10:28:03.629316] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:57.993 [2024-11-04 10:28:03.629335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:57.993 [2024-11-04 10:28:03.629353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629366] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:57.993 [2024-11-04 10:28:03.629373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:57.993 [2024-11-04 10:28:03.629391] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629398] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629404] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:57.993 [2024-11-04 10:28:03.629410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:57.993 [2024-11-04 10:28:03.629416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629423] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:57.993 [2024-11-04 10:28:03.629431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:57.993 
[2024-11-04 10:28:03.629438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:57.993 [2024-11-04 10:28:03.629452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:57.993 [2024-11-04 10:28:03.629458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:57.993 [2024-11-04 10:28:03.629465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:57.993 [2024-11-04 10:28:03.629471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:57.993 [2024-11-04 10:28:03.629477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:57.993 [2024-11-04 10:28:03.629483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:57.993 [2024-11-04 10:28:03.629491] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:57.993 [2024-11-04 10:28:03.629500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629509] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:57.993 [2024-11-04 10:28:03.629516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:57.993 [2024-11-04 10:28:03.629538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:57.993 [2024-11-04 10:28:03.629545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:57.993 [2024-11-04 10:28:03.629552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:57.993 [2024-11-04 10:28:03.629559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629566] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629573] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:57.993 [2024-11-04 10:28:03.629607] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:57.993 [2024-11-04 10:28:03.629615] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629623] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:57.993 [2024-11-04 10:28:03.629630] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:57.993 [2024-11-04 10:28:03.629637] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:57.993 [2024-11-04 10:28:03.629644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:57.993 [2024-11-04 10:28:03.629652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.993 [2024-11-04 10:28:03.629661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:57.994 [2024-11-04 10:28:03.629668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.588 ms 00:27:57.994 [2024-11-04 10:28:03.629675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.653551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.653678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:57.994 [2024-11-04 10:28:03.653730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.827 ms 00:27:57.994 [2024-11-04 10:28:03.653753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.653818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.653899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:57.994 [2024-11-04 10:28:03.653927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:57.994 [2024-11-04 10:28:03.653946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.684202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.684356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:57.994 [2024-11-04 10:28:03.684413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.186 ms 00:27:57.994 [2024-11-04 10:28:03.684437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.684495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.684515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:57.994 [2024-11-04 10:28:03.684535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:57.994 [2024-11-04 10:28:03.684553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.684674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.684722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
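
The Region type:... lines above are the superblock v5 metadata layout in raw form, one descriptor per region with its type id, version, starting block offset, and size in blocks; the 0xfffffffe descriptors appear to cover the unallocated remainder of each device. When a dump like this wraps badly in the console, the descriptors can be pulled back out of a saved log and tabulated with plain text tools (build.log is a placeholder; this assumes only the log format shown here):

grep -o 'Region type:0x[0-9a-f]* ver:[0-9]* blk_offs:0x[0-9a-f]* blk_sz:0x[0-9a-f]*' build.log \
    | sort -u \
    | awk '{ gsub(/[a-z_]+:/, ""); printf "type=%-10s ver=%s offs=%-9s sz=%s\n", $2, $3, $4, $5 }'
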
00:27:57.994 [2024-11-04 10:28:03.684759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:57.994 [2024-11-04 10:28:03.684778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.684843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.684864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:57.994 [2024-11-04 10:28:03.684884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:27:57.994 [2024-11-04 10:28:03.684908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.698670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.698808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:57.994 [2024-11-04 10:28:03.698867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.731 ms 00:27:57.994 [2024-11-04 10:28:03.698890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.699032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.699066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:27:57.994 [2024-11-04 10:28:03.699087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:57.994 [2024-11-04 10:28:03.699140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:57.994 [2024-11-04 10:28:03.729203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:57.994 [2024-11-04 10:28:03.729378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:27:57.994 [2024-11-04 10:28:03.729438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.006 ms 00:27:57.994 [2024-11-04 10:28:03.729462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.739160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.739263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:58.301 [2024-11-04 10:28:03.739312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.539 ms 00:27:58.301 [2024-11-04 10:28:03.739343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.793068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.793267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:58.301 [2024-11-04 10:28:03.793325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 53.646 ms 00:27:58.301 [2024-11-04 10:28:03.793349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.793490] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:27:58.301 [2024-11-04 10:28:03.793738] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:27:58.301 [2024-11-04 10:28:03.793863] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:27:58.301 [2024-11-04 10:28:03.794094] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:27:58.301 [2024-11-04 10:28:03.794124] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.794144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:27:58.301 [2024-11-04 10:28:03.794163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:27:58.301 [2024-11-04 10:28:03.794182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.794297] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:27:58.301 [2024-11-04 10:28:03.794335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.794355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:27:58.301 [2024-11-04 10:28:03.794379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:27:58.301 [2024-11-04 10:28:03.794397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.808795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.808920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:27:58.301 [2024-11-04 10:28:03.808981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.326 ms 00:27:58.301 [2024-11-04 10:28:03.809004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.817662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.817761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:27:58.301 [2024-11-04 10:28:03.817824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:58.301 [2024-11-04 10:28:03.817848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.301 [2024-11-04 10:28:03.817955] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:27:58.301 [2024-11-04 10:28:03.818103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.301 [2024-11-04 10:28:03.818179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:58.301 [2024-11-04 10:28:03.818204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.149 ms 00:27:58.301 [2024-11-04 10:28:03.818223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.559 [2024-11-04 10:28:04.255143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.559 [2024-11-04 10:28:04.255342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:58.559 [2024-11-04 10:28:04.255412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 435.965 ms 00:27:58.559 [2024-11-04 10:28:04.255437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.559 [2024-11-04 10:28:04.259529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.559 [2024-11-04 10:28:04.259642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:58.559 [2024-11-04 10:28:04.259700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.001 ms 00:27:58.559 [2024-11-04 10:28:04.259723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.559 [2024-11-04 10:28:04.259994] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:27:58.559 [2024-11-04 10:28:04.260059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.559 [2024-11-04 10:28:04.260132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:58.559 [2024-11-04 10:28:04.260219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.306 ms 00:27:58.560 [2024-11-04 10:28:04.260242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.560 [2024-11-04 10:28:04.260284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.560 [2024-11-04 10:28:04.260344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:58.560 [2024-11-04 10:28:04.260366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:58.560 [2024-11-04 10:28:04.260386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:58.560 [2024-11-04 10:28:04.260482] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 442.524 ms, result 0 00:27:58.560 [2024-11-04 10:28:04.260589] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:27:58.560 [2024-11-04 10:28:04.260742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:58.560 [2024-11-04 10:28:04.260851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:27:58.560 [2024-11-04 10:28:04.260865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.153 ms 00:27:58.560 [2024-11-04 10:28:04.260873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.674778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.674961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:27:59.126 [2024-11-04 10:28:04.675028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 412.940 ms 00:27:59.126 [2024-11-04 10:28:04.675053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.678867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.678975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:27:59.126 [2024-11-04 10:28:04.679033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.818 ms 00:27:59.126 [2024-11-04 10:28:04.679055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.679322] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:27:59.126 [2024-11-04 10:28:04.679387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.679454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:27:59.126 [2024-11-04 10:28:04.679477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:27:59.126 [2024-11-04 10:28:04.679496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.679564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.679588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:27:59.126 [2024-11-04 10:28:04.679640] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:59.126 [2024-11-04 10:28:04.679661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.679712] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 419.118 ms, result 0 00:27:59.126 [2024-11-04 10:28:04.679819] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:27:59.126 [2024-11-04 10:28:04.679894] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:59.126 [2024-11-04 10:28:04.679927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.679945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:27:59.126 [2024-11-04 10:28:04.679965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 861.990 ms 00:27:59.126 [2024-11-04 10:28:04.679983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.680060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.680084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:27:59.126 [2024-11-04 10:28:04.680108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:59.126 [2024-11-04 10:28:04.680127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.690797] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:59.126 [2024-11-04 10:28:04.690990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.691020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:59.126 [2024-11-04 10:28:04.691084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.835 ms 00:27:59.126 [2024-11-04 10:28:04.691106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.691840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.691918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:27:59.126 [2024-11-04 10:28:04.691966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.625 ms 00:27:59.126 [2024-11-04 10:28:04.691992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.694227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.694306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:27:59.126 [2024-11-04 10:28:04.694352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.204 ms 00:27:59.126 [2024-11-04 10:28:04.694374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.694422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.694478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:27:59.126 [2024-11-04 10:28:04.694524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:59.126 [2024-11-04 10:28:04.694543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.694661] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.695007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:59.126 [2024-11-04 10:28:04.695098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:59.126 [2024-11-04 10:28:04.695124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.126 [2024-11-04 10:28:04.695193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.126 [2024-11-04 10:28:04.695220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:59.126 [2024-11-04 10:28:04.695314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:59.127 [2024-11-04 10:28:04.695336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.127 [2024-11-04 10:28:04.695400] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:59.127 [2024-11-04 10:28:04.695431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.127 [2024-11-04 10:28:04.695564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:59.127 [2024-11-04 10:28:04.695588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:59.127 [2024-11-04 10:28:04.695606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.127 [2024-11-04 10:28:04.695675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.127 [2024-11-04 10:28:04.695796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:59.127 [2024-11-04 10:28:04.695821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:27:59.127 [2024-11-04 10:28:04.695841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.127 [2024-11-04 10:28:04.696820] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1098.166 ms, result 0 00:27:59.127 [2024-11-04 10:28:04.711441] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:59.127 [2024-11-04 10:28:04.727431] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:59.127 [2024-11-04 10:28:04.735713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:59.385 Validate MD5 checksum, iteration 1 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:59.385 10:28:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:59.385 [2024-11-04 10:28:05.123630] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:27:59.385 [2024-11-04 10:28:05.123921] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80259 ] 00:27:59.642 [2024-11-04 10:28:05.283060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:59.642 [2024-11-04 10:28:05.383292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.552  [2024-11-04T10:28:07.555Z] Copying: 674/1024 [MB] (674 MBps) [2024-11-04T10:28:08.935Z] Copying: 1024/1024 [MB] (average 671 MBps) 00:28:03.190 00:28:03.190 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:03.190 10:28:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:05.105 Validate MD5 checksum, iteration 2 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=44ec8bb1df94157da7d0c0227b93c711 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 44ec8bb1df94157da7d0c0227b93c711 != \4\4\e\c\8\b\b\1\d\f\9\4\1\5\7\d\a\7\d\0\c\0\2\2\7\b\9\3\c\7\1\1 ]] 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:05.105 10:28:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:05.105 [2024-11-04 10:28:10.743077] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 00:28:05.105 [2024-11-04 10:28:10.743357] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80320 ] 00:28:05.366 [2024-11-04 10:28:10.906674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:05.366 [2024-11-04 10:28:11.006062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:06.778  [2024-11-04T10:28:13.091Z] Copying: 726/1024 [MB] (726 MBps) [2024-11-04T10:28:17.282Z] Copying: 1024/1024 [MB] (average 713 MBps) 00:28:11.537 00:28:11.537 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:11.537 10:28:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:13.448 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=cfd7d384e492b15c951d5892e91387fd 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ cfd7d384e492b15c951d5892e91387fd != \c\f\d\7\d\3\8\4\e\4\9\2\b\1\5\c\9\5\1\d\5\8\9\2\e\9\1\3\8\7\f\d ]] 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:13.449 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80224 ]] 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80224 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80224 ']' 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80224 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80224 00:28:13.710 killing process with pid 80224 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80224' 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80224 00:28:13.710 10:28:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80224 00:28:14.276 [2024-11-04 10:28:19.892620] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:14.276 [2024-11-04 10:28:19.904070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.276 [2024-11-04 10:28:19.904111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:14.276 [2024-11-04 10:28:19.904121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:14.276 [2024-11-04 10:28:19.904128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.276 [2024-11-04 10:28:19.904146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:14.276 [2024-11-04 10:28:19.906249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.276 [2024-11-04 10:28:19.906276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:14.276 [2024-11-04 10:28:19.906284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.091 ms 00:28:14.276 [2024-11-04 10:28:19.906295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.276 [2024-11-04 10:28:19.906478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.276 [2024-11-04 10:28:19.906487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:14.276 [2024-11-04 10:28:19.906495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.166 ms 00:28:14.276 [2024-11-04 10:28:19.906501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.276 [2024-11-04 10:28:19.907462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.276 [2024-11-04 10:28:19.907581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:14.276 [2024-11-04 10:28:19.907594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.949 ms 00:28:14.277 [2024-11-04 10:28:19.907601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.908524] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.908540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:14.277 [2024-11-04 10:28:19.908547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.902 ms 00:28:14.277 [2024-11-04 10:28:19.908553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.916014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.916043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:14.277 [2024-11-04 10:28:19.916051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.435 ms 00:28:14.277 [2024-11-04 10:28:19.916057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.920301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.920330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:14.277 [2024-11-04 10:28:19.920338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.223 ms 00:28:14.277 [2024-11-04 10:28:19.920345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.920411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.920427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:14.277 [2024-11-04 10:28:19.920435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:28:14.277 [2024-11-04 10:28:19.920442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.927572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.927597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:14.277 [2024-11-04 10:28:19.927604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.116 ms 00:28:14.277 [2024-11-04 10:28:19.927611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.934967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.934992] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:14.277 [2024-11-04 10:28:19.934999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.342 ms 00:28:14.277 [2024-11-04 10:28:19.935005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.942041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.942159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:14.277 [2024-11-04 10:28:19.942171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.021 ms 00:28:14.277 [2024-11-04 10:28:19.942178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.949302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.949395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:14.277 [2024-11-04 10:28:19.949407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.087 ms 00:28:14.277 [2024-11-04 10:28:19.949412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.949428] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:14.277 [2024-11-04 10:28:19.949443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.277 [2024-11-04 10:28:19.949451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:14.277 [2024-11-04 10:28:19.949457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:14.277 [2024-11-04 10:28:19.949464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949483] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:14.277 [2024-11-04 10:28:19.949555] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:14.277 [2024-11-04 10:28:19.949561] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 7ca18fac-4210-400c-abce-904d8579156b 00:28:14.277 [2024-11-04 10:28:19.949566] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:14.277 [2024-11-04 10:28:19.949572] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:14.277 [2024-11-04 10:28:19.949578] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:14.277 [2024-11-04 10:28:19.949585] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:14.277 [2024-11-04 10:28:19.949590] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:14.277 [2024-11-04 10:28:19.949596] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:14.277 [2024-11-04 10:28:19.949601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:14.277 [2024-11-04 10:28:19.949606] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:14.277 [2024-11-04 10:28:19.949611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:14.277 [2024-11-04 10:28:19.949616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.949623] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:14.277 [2024-11-04 10:28:19.949631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.189 ms 00:28:14.277 [2024-11-04 10:28:19.949637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.959035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.959061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:14.277 [2024-11-04 10:28:19.959069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.377 ms 00:28:14.277 [2024-11-04 10:28:19.959074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.959341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:14.277 [2024-11-04 10:28:19.959353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:14.277 [2024-11-04 10:28:19.959360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.252 ms 00:28:14.277 [2024-11-04 10:28:19.959365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.991987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.277 [2024-11-04 10:28:19.992106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:14.277 [2024-11-04 10:28:19.992119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.277 [2024-11-04 10:28:19.992126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.992155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.277 [2024-11-04 10:28:19.992166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:14.277 [2024-11-04 10:28:19.992172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.277 [2024-11-04 10:28:19.992179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.992256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.277 [2024-11-04 10:28:19.992264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:14.277 [2024-11-04 10:28:19.992271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.277 [2024-11-04 10:28:19.992277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.277 [2024-11-04 10:28:19.992289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.277 [2024-11-04 10:28:19.992296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:14.277 [2024-11-04 10:28:19.992304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.277 [2024-11-04 10:28:19.992310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.063195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.063239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:14.535 [2024-11-04 10:28:20.063250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.063257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:14.535 [2024-11-04 10:28:20.114231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114312] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:14.535 [2024-11-04 10:28:20.114319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:14.535 [2024-11-04 10:28:20.114384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:14.535 [2024-11-04 10:28:20.114489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:14.535 [2024-11-04 10:28:20.114533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:14.535 [2024-11-04 10:28:20.114582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:14.535 [2024-11-04 10:28:20.114627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:14.535 [2024-11-04 10:28:20.114634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:14.535 [2024-11-04 10:28:20.114643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:14.535 [2024-11-04 10:28:20.114733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 210.641 ms, result 0 00:28:15.101 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:15.101 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:15.101 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:15.101 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:15.102 Remove shared memory files 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80026 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:15.102 ************************************ 00:28:15.102 END TEST ftl_upgrade_shutdown 00:28:15.102 ************************************ 00:28:15.102 00:28:15.102 real 1m23.244s 00:28:15.102 user 1m55.858s 00:28:15.102 sys 0m19.149s 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:15.102 10:28:20 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@14 -- # killprocess 72259 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@952 -- # '[' -z 72259 ']' 00:28:15.102 Process with pid 72259 is not found 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@956 -- # kill -0 72259 00:28:15.102 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72259) - No such process 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72259 is not found' 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80457 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@20 -- # waitforlisten 80457 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@833 -- # '[' -z 80457 ']' 00:28:15.102 10:28:20 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:15.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:15.102 10:28:20 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:15.359 [2024-11-04 10:28:20.866849] Starting SPDK v25.01-pre git sha1 3f50defde / DPDK 24.03.0 initialization... 
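The two "Validate MD5 checksum" passes traced above follow a simple read-back loop: each iteration reads 1024 x 1 MiB blocks from the ftln1 bdev over NVMe/TCP through the tcp_dd wrapper (which, per the trace, invokes spdk_dd with the cpumask, RPC socket and ini.json shown), advances the skip offset by the amount already verified, and compares the md5sum of the output file with the checksum recorded before the shutdown. A hedged bash reconstruction from the xtrace — the iterations variable appears in the trace, while testfile and the md5_list array name are assumptions (the log only shows the literal paths and sums):

  # sketch of test_validate_checksum from upgrade_shutdown.sh, as traced above;
  # not the verbatim script
  test_validate_checksum() {
      local i sum skip=0
      local testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file   # path as shown in the log
      for ((i = 0; i < iterations; i++)); do
          echo "Validate MD5 checksum, iteration $((i + 1))"
          # read back 1024 MiB from the FTL bdev, resuming where the last pass stopped
          tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
          skip=$((skip + 1024))
          sum=$(md5sum "$testfile" | cut -f1 -d' ')
          # the xtrace shows a negated test ([[ $sum != ... ]]) guarding an error path;
          # equality means the data survived the upgrade/shutdown cycle intact.
          # md5_list is an assumed name for the checksums captured before shutdown.
          [[ $sum == "${md5_list[i]}" ]] || return 1
      done
  }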
00:28:15.359 [2024-11-04 10:28:20.867122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80457 ] 00:28:15.359 [2024-11-04 10:28:21.022828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.617 [2024-11-04 10:28:21.106680] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.183 10:28:21 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:16.183 10:28:21 ftl -- common/autotest_common.sh@866 -- # return 0 00:28:16.183 10:28:21 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:16.440 nvme0n1 00:28:16.440 10:28:21 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:16.440 10:28:21 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:16.440 10:28:21 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:16.440 10:28:22 ftl -- ftl/common.sh@28 -- # stores=9365aa6c-2fa5-49dd-a2f3-d8cdaa99eb03 00:28:16.440 10:28:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:16.440 10:28:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9365aa6c-2fa5-49dd-a2f3-d8cdaa99eb03 00:28:16.697 10:28:22 ftl -- ftl/ftl.sh@23 -- # killprocess 80457 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@952 -- # '[' -z 80457 ']' 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@956 -- # kill -0 80457 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@957 -- # uname 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80457 00:28:16.697 killing process with pid 80457 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80457' 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@971 -- # kill 80457 00:28:16.697 10:28:22 ftl -- common/autotest_common.sh@976 -- # wait 80457 00:28:18.072 10:28:23 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:18.072 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:18.330 Waiting for block devices as requested 00:28:18.330 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.330 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.330 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:18.330 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:23.602 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:23.602 10:28:29 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:23.602 Remove shared memory files 00:28:23.602 10:28:29 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:23.602 10:28:29 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:23.602 10:28:29 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:23.602 10:28:29 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:23.602 10:28:29 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:23.602 10:28:29 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:23.602 
************************************ 00:28:23.602 END TEST ftl 00:28:23.602 ************************************ 00:28:23.602 00:28:23.602 real 12m3.354s 00:28:23.602 user 14m14.077s 00:28:23.602 sys 1m12.706s 00:28:23.603 10:28:29 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:23.603 10:28:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:23.603 10:28:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:23.603 10:28:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:23.603 10:28:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:23.603 10:28:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:23.603 10:28:29 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:23.603 10:28:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:23.603 10:28:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:23.603 10:28:29 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:23.603 10:28:29 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:23.603 10:28:29 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:23.603 10:28:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:23.603 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:28:23.603 10:28:29 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:23.603 10:28:29 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:28:23.603 10:28:29 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:28:23.603 10:28:29 -- common/autotest_common.sh@10 -- # set +x 00:28:24.534 INFO: APP EXITING 00:28:24.534 INFO: killing all VMs 00:28:24.534 INFO: killing vhost app 00:28:24.534 INFO: EXIT DONE 00:28:25.100 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.358 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:25.358 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:25.358 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:25.358 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:25.616 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:25.873 Cleaning 00:28:25.873 Removing: /var/run/dpdk/spdk0/config 00:28:25.873 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:25.873 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:25.873 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:25.873 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:25.873 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:25.873 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:25.873 Removing: /var/run/dpdk/spdk0 00:28:25.873 Removing: /var/run/dpdk/spdk_pid56919 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57127 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57339 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57432 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57472 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57589 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57607 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57806 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57899 00:28:25.873 Removing: /var/run/dpdk/spdk_pid57995 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58100 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58192 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58237 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58268 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58344 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58428 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58864 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58917 
00:28:25.873 Removing: /var/run/dpdk/spdk_pid58975 00:28:25.873 Removing: /var/run/dpdk/spdk_pid58991 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59098 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59114 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59216 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59232 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59291 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59309 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59362 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59380 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59545 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59582 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59665 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59843 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59927 00:28:25.873 Removing: /var/run/dpdk/spdk_pid59958 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60401 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60495 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60606 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60661 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60681 00:28:25.873 Removing: /var/run/dpdk/spdk_pid60765 00:28:25.874 Removing: /var/run/dpdk/spdk_pid61395 00:28:25.874 Removing: /var/run/dpdk/spdk_pid61432 00:28:25.874 Removing: /var/run/dpdk/spdk_pid61905 00:28:26.130 Removing: /var/run/dpdk/spdk_pid62003 00:28:26.130 Removing: /var/run/dpdk/spdk_pid62123 00:28:26.130 Removing: /var/run/dpdk/spdk_pid62176 00:28:26.130 Removing: /var/run/dpdk/spdk_pid62202 00:28:26.130 Removing: /var/run/dpdk/spdk_pid62227 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64074 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64206 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64210 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64227 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64273 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64277 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64289 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64334 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64338 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64350 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64396 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64400 00:28:26.131 Removing: /var/run/dpdk/spdk_pid64412 00:28:26.131 Removing: /var/run/dpdk/spdk_pid65773 00:28:26.131 Removing: /var/run/dpdk/spdk_pid65870 00:28:26.131 Removing: /var/run/dpdk/spdk_pid67270 00:28:26.131 Removing: /var/run/dpdk/spdk_pid68648 00:28:26.131 Removing: /var/run/dpdk/spdk_pid68736 00:28:26.131 Removing: /var/run/dpdk/spdk_pid68812 00:28:26.131 Removing: /var/run/dpdk/spdk_pid68894 00:28:26.131 Removing: /var/run/dpdk/spdk_pid68993 00:28:26.131 Removing: /var/run/dpdk/spdk_pid69067 00:28:26.131 Removing: /var/run/dpdk/spdk_pid69209 00:28:26.131 Removing: /var/run/dpdk/spdk_pid69564 00:28:26.131 Removing: /var/run/dpdk/spdk_pid69595 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70030 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70217 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70318 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70423 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70472 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70497 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70799 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70848 00:28:26.131 Removing: /var/run/dpdk/spdk_pid70921 00:28:26.131 Removing: /var/run/dpdk/spdk_pid71311 00:28:26.131 Removing: /var/run/dpdk/spdk_pid71455 00:28:26.131 Removing: /var/run/dpdk/spdk_pid72259 00:28:26.131 Removing: /var/run/dpdk/spdk_pid72385 00:28:26.131 Removing: /var/run/dpdk/spdk_pid72557 00:28:26.131 Removing: 
/var/run/dpdk/spdk_pid72655 00:28:26.131 Removing: /var/run/dpdk/spdk_pid72949 00:28:26.131 Removing: /var/run/dpdk/spdk_pid73201 00:28:26.131 Removing: /var/run/dpdk/spdk_pid73550 00:28:26.131 Removing: /var/run/dpdk/spdk_pid73733 00:28:26.131 Removing: /var/run/dpdk/spdk_pid73885 00:28:26.131 Removing: /var/run/dpdk/spdk_pid73933 00:28:26.131 Removing: /var/run/dpdk/spdk_pid74066 00:28:26.131 Removing: /var/run/dpdk/spdk_pid74097 00:28:26.131 Removing: /var/run/dpdk/spdk_pid74144 00:28:26.131 Removing: /var/run/dpdk/spdk_pid74308 00:28:26.131 Removing: /var/run/dpdk/spdk_pid74527 00:28:26.131 Removing: /var/run/dpdk/spdk_pid75016 00:28:26.131 Removing: /var/run/dpdk/spdk_pid75855 00:28:26.131 Removing: /var/run/dpdk/spdk_pid76466 00:28:26.131 Removing: /var/run/dpdk/spdk_pid77344 00:28:26.131 Removing: /var/run/dpdk/spdk_pid77476 00:28:26.131 Removing: /var/run/dpdk/spdk_pid77568 00:28:26.131 Removing: /var/run/dpdk/spdk_pid77983 00:28:26.131 Removing: /var/run/dpdk/spdk_pid78037 00:28:26.131 Removing: /var/run/dpdk/spdk_pid78441 00:28:26.131 Removing: /var/run/dpdk/spdk_pid78723 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79476 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79604 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79646 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79711 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79765 00:28:26.131 Removing: /var/run/dpdk/spdk_pid79829 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80026 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80095 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80163 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80224 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80259 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80320 00:28:26.131 Removing: /var/run/dpdk/spdk_pid80457 00:28:26.131 Clean 00:28:26.131 10:28:31 -- common/autotest_common.sh@1451 -- # return 0 00:28:26.131 10:28:31 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:26.131 10:28:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.131 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:28:26.131 10:28:31 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:26.131 10:28:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:26.131 10:28:31 -- common/autotest_common.sh@10 -- # set +x 00:28:26.388 10:28:31 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:26.388 10:28:31 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:26.388 10:28:31 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:26.388 10:28:31 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:26.388 10:28:31 -- spdk/autotest.sh@394 -- # hostname 00:28:26.388 10:28:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:26.388 geninfo: WARNING: invalid characters removed from testname! 
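The guarded shutdowns traced earlier for pids 80224 and 80457 (and the no-such-process path for 72259) all go through the killprocess helper from autotest_common.sh. A hedged reconstruction assembled from those xtrace lines — the body of the sudo branch is an assumption, since the trace only ever takes the reactor_0 path:

  killprocess() {
      local pid=$1 process_name
      [[ -n $pid ]] || return 1                       # '[' -z 80224 ']' guard in the trace
      if ! kill -0 "$pid" 2>/dev/null; then
          # the pid-72259 path above: target already exited
          echo "Process with pid $pid is not found"
          return 0
      fi
      if [[ $(uname) == Linux ]]; then
          process_name=$(ps --no-headers -o comm= "$pid")
      fi
      if [[ $process_name == sudo ]]; then
          # assumption: escalate when the target was launched via sudo;
          # the trace only shows the '[' reactor_0 = sudo ']' test failing
          sudo kill "$pid"
      else
          echo "killing process with pid $pid"
          kill "$pid"
      fi
      wait "$pid"   # reap; works here because spdk_tgt is a child of this shell
  }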
00:28:52.968 10:28:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:52.968 10:28:58 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:55.511 10:29:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:58.083 10:29:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:28:59.979 10:29:05 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:02.509 10:29:07 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:05.034 10:29:10 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:05.034 10:29:10 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:05.034 10:29:10 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:05.034 10:29:10 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:05.034 10:29:10 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:05.034 10:29:10 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:05.034 + [[ -n 5027 ]] 00:29:05.034 + sudo kill 5027 00:29:05.041 [Pipeline] } 00:29:05.056 [Pipeline] // timeout 00:29:05.061 [Pipeline] } 00:29:05.074 [Pipeline] // stage 00:29:05.080 [Pipeline] } 00:29:05.094 [Pipeline] // catchError 00:29:05.104 [Pipeline] stage 00:29:05.107 [Pipeline] { (Stop VM) 00:29:05.121 [Pipeline] sh 00:29:05.397 + vagrant halt 00:29:07.951 ==> default: Halting domain... 
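The coverage post-processing traced above reduces to one capture (the lcov -c invocation before the geninfo warning), one merge, and a series of path filters. A condensed, hedged sketch of the same steps with the repeated --rc flags factored into an array — the log passes them inline on every call, and its '/usr/*' removal additionally carries --ignore-errors unused,unused, omitted here:

  LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
      --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1
      --rc genhtml_legend=1 --rc geninfo_all_blocks=1
      --rc geninfo_unexecuted_blocks=1 -q)
  out=/home/vagrant/spdk_repo/spdk/../output
  # merge the baseline capture with the test-time capture
  lcov "${LCOV_OPTS[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
  # strip vendored, system and helper-app paths from the combined report,
  # in the same order the log applies them
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov "${LCOV_OPTS[@]}" -r "$out/cov_total.info" "$pat" -o "$out/cov_total.info"
  done
  rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR   # final cleanup, as traced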
00:29:11.247 [Pipeline] sh 00:29:11.531 + vagrant destroy -f 00:29:14.073 ==> default: Removing domain... 00:29:14.343 [Pipeline] sh 00:29:14.622 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:14.631 [Pipeline] } 00:29:14.645 [Pipeline] // stage 00:29:14.650 [Pipeline] } 00:29:14.664 [Pipeline] // dir 00:29:14.669 [Pipeline] } 00:29:14.685 [Pipeline] // wrap 00:29:14.692 [Pipeline] } 00:29:14.705 [Pipeline] // catchError 00:29:14.715 [Pipeline] stage 00:29:14.718 [Pipeline] { (Epilogue) 00:29:14.732 [Pipeline] sh 00:29:15.011 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:20.320 [Pipeline] catchError 00:29:20.322 [Pipeline] { 00:29:20.336 [Pipeline] sh 00:29:20.616 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:20.616 Artifacts sizes are good 00:29:20.624 [Pipeline] } 00:29:20.638 [Pipeline] // catchError 00:29:20.650 [Pipeline] archiveArtifacts 00:29:20.657 Archiving artifacts 00:29:20.773 [Pipeline] cleanWs 00:29:20.787 [WS-CLEANUP] Deleting project workspace... 00:29:20.787 [WS-CLEANUP] Deferred wipeout is used... 00:29:20.793 [WS-CLEANUP] done 00:29:20.795 [Pipeline] } 00:29:20.813 [Pipeline] // stage 00:29:20.819 [Pipeline] } 00:29:20.834 [Pipeline] // node 00:29:20.839 [Pipeline] End of Pipeline 00:29:20.874 Finished: SUCCESS