00:00:00.000 Started by upstream project "autotest-per-patch" build number 132062 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.063 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.064 The recommended git tool is: git 00:00:00.064 using credential 00000000-0000-0000-0000-000000000002 00:00:00.066 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.103 Fetching changes from the remote Git repository 00:00:00.105 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.153 Using shallow fetch with depth 1 00:00:00.153 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.154 > git --version # timeout=10 00:00:00.206 > git --version # 'git version 2.39.2' 00:00:00.206 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.244 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.244 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.409 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.418 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.430 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:05.430 > git config core.sparsecheckout # timeout=10 00:00:05.442 > git read-tree -mu HEAD # timeout=10 00:00:05.458 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:05.479 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:05.479 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:05.564 [Pipeline] Start of Pipeline 00:00:05.575 [Pipeline] library 00:00:05.576 Loading library shm_lib@master 00:00:05.576 Library shm_lib@master is cached. Copying from home. 00:00:05.593 [Pipeline] node 00:00:05.608 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest_3 00:00:05.610 [Pipeline] { 00:00:05.620 [Pipeline] catchError 00:00:05.622 [Pipeline] { 00:00:05.632 [Pipeline] wrap 00:00:05.638 [Pipeline] { 00:00:05.644 [Pipeline] stage 00:00:05.645 [Pipeline] { (Prologue) 00:00:05.663 [Pipeline] echo 00:00:05.665 Node: VM-host-SM38 00:00:05.671 [Pipeline] cleanWs 00:00:05.681 [WS-CLEANUP] Deleting project workspace... 00:00:05.681 [WS-CLEANUP] Deferred wipeout is used... 
00:00:05.689 [WS-CLEANUP] done 00:00:05.867 [Pipeline] setCustomBuildProperty 00:00:05.961 [Pipeline] httpRequest 00:00:06.783 [Pipeline] echo 00:00:06.784 Sorcerer 10.211.164.101 is alive 00:00:06.795 [Pipeline] retry 00:00:06.797 [Pipeline] { 00:00:06.811 [Pipeline] httpRequest 00:00:06.816 HttpMethod: GET 00:00:06.816 URL: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.817 Sending request to url: http://10.211.164.101/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:06.826 Response Code: HTTP/1.1 200 OK 00:00:06.826 Success: Status code 200 is in the accepted range: 200,404 00:00:06.827 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.052 [Pipeline] } 00:00:14.069 [Pipeline] // retry 00:00:14.077 [Pipeline] sh 00:00:14.363 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:14.378 [Pipeline] httpRequest 00:00:15.606 [Pipeline] echo 00:00:15.608 Sorcerer 10.211.164.101 is alive 00:00:15.619 [Pipeline] retry 00:00:15.620 [Pipeline] { 00:00:15.632 [Pipeline] httpRequest 00:00:15.638 HttpMethod: GET 00:00:15.639 URL: http://10.211.164.101/packages/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:00:15.639 Sending request to url: http://10.211.164.101/packages/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:00:15.646 Response Code: HTTP/1.1 200 OK 00:00:15.647 Success: Status code 200 is in the accepted range: 200,404 00:00:15.648 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_3/spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:01:50.438 [Pipeline] } 00:01:50.455 [Pipeline] // retry 00:01:50.462 [Pipeline] sh 00:01:50.773 + tar --no-same-owner -xf spdk_1aeff8917b2f794105f6695e771cf5d68f6d7ab5.tar.gz 00:01:53.307 [Pipeline] sh 00:01:53.615 + git -C spdk log --oneline -n5 00:01:53.615 1aeff8917 lib/reduce: Add a chunk data read/write cache 00:01:53.615 fa3ab7384 bdev/raid: Fix raid_bdev->sb null pointer 00:01:53.615 12fc2abf1 test: Remove autopackage.sh 00:01:53.615 83ba90867 fio/bdev: fix typo in README 00:01:53.615 45379ed84 module/compress: Cleanup vol data, when claim fails 00:01:53.633 [Pipeline] writeFile 00:01:53.648 [Pipeline] sh 00:01:53.928 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:53.938 [Pipeline] sh 00:01:54.215 + cat autorun-spdk.conf 00:01:54.215 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.215 SPDK_TEST_NVME=1 00:01:54.215 SPDK_TEST_FTL=1 00:01:54.215 SPDK_TEST_ISAL=1 00:01:54.215 SPDK_RUN_ASAN=1 00:01:54.215 SPDK_RUN_UBSAN=1 00:01:54.215 SPDK_TEST_XNVME=1 00:01:54.215 SPDK_TEST_NVME_FDP=1 00:01:54.215 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.221 RUN_NIGHTLY=0 00:01:54.223 [Pipeline] } 00:01:54.236 [Pipeline] // stage 00:01:54.252 [Pipeline] stage 00:01:54.254 [Pipeline] { (Run VM) 00:01:54.266 [Pipeline] sh 00:01:54.546 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:54.546 + echo 'Start stage prepare_nvme.sh' 00:01:54.546 Start stage prepare_nvme.sh 00:01:54.546 + [[ -n 4 ]] 00:01:54.546 + disk_prefix=ex4 00:01:54.546 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_3 ]] 00:01:54.546 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf ]] 00:01:54.546 + source /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf 00:01:54.546 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:54.546 ++ SPDK_TEST_NVME=1 00:01:54.546 ++ SPDK_TEST_FTL=1 00:01:54.546 ++ SPDK_TEST_ISAL=1 00:01:54.546 ++ SPDK_RUN_ASAN=1 00:01:54.546 ++ 
SPDK_RUN_UBSAN=1 00:01:54.546 ++ SPDK_TEST_XNVME=1 00:01:54.546 ++ SPDK_TEST_NVME_FDP=1 00:01:54.546 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:54.546 ++ RUN_NIGHTLY=0 00:01:54.546 + cd /var/jenkins/workspace/nvme-vg-autotest_3 00:01:54.546 + nvme_files=() 00:01:54.546 + declare -A nvme_files 00:01:54.546 + backend_dir=/var/lib/libvirt/images/backends 00:01:54.546 + nvme_files['nvme.img']=5G 00:01:54.546 + nvme_files['nvme-cmb.img']=5G 00:01:54.546 + nvme_files['nvme-multi0.img']=4G 00:01:54.546 + nvme_files['nvme-multi1.img']=4G 00:01:54.546 + nvme_files['nvme-multi2.img']=4G 00:01:54.546 + nvme_files['nvme-openstack.img']=8G 00:01:54.546 + nvme_files['nvme-zns.img']=5G 00:01:54.546 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:54.546 + (( SPDK_TEST_FTL == 1 )) 00:01:54.546 + nvme_files["nvme-ftl.img"]=6G 00:01:54.546 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:54.546 + nvme_files["nvme-fdp.img"]=1G 00:01:54.546 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:54.546 + for nvme in "${!nvme_files[@]}" 00:01:54.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:01:54.546 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:54.546 + for nvme in "${!nvme_files[@]}" 00:01:54.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G 00:01:54.546 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc 00:01:54.546 + for nvme in "${!nvme_files[@]}" 00:01:54.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:01:54.546 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.546 + for nvme in "${!nvme_files[@]}" 00:01:54.546 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:01:54.804 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:54.804 + for nvme in "${!nvme_files[@]}" 00:01:54.804 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:01:54.805 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.805 + for nvme in "${!nvme_files[@]}" 00:01:54.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:01:54.805 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:54.805 + for nvme in "${!nvme_files[@]}" 00:01:54.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:01:54.805 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:54.805 + for nvme in "${!nvme_files[@]}" 00:01:54.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G 00:01:54.805 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc 00:01:54.805 + for nvme in "${!nvme_files[@]}" 00:01:54.805 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:01:54.805 Formatting 
'/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:54.805 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:01:54.805 + echo 'End stage prepare_nvme.sh' 00:01:54.805 End stage prepare_nvme.sh 00:01:54.815 [Pipeline] sh 00:01:55.111 + DISTRO=fedora39 00:01:55.111 + CPUS=10 00:01:55.111 + RAM=12288 00:01:55.111 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:55.111 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39 00:01:55.111 00:01:55.111 DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant 00:01:55.111 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_3/spdk 00:01:55.111 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_3 00:01:55.111 HELP=0 00:01:55.111 DRY_RUN=0 00:01:55.111 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img, 00:01:55.111 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme, 00:01:55.111 NVME_AUTO_CREATE=0 00:01:55.111 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,, 00:01:55.111 NVME_CMB=,,,, 00:01:55.111 NVME_PMR=,,,, 00:01:55.111 NVME_ZNS=,,,, 00:01:55.111 NVME_MS=true,,,, 00:01:55.111 NVME_FDP=,,,on, 00:01:55.111 SPDK_VAGRANT_DISTRO=fedora39 00:01:55.111 SPDK_VAGRANT_VMCPU=10 00:01:55.111 SPDK_VAGRANT_VMRAM=12288 00:01:55.111 SPDK_VAGRANT_PROVIDER=libvirt 00:01:55.111 SPDK_VAGRANT_HTTP_PROXY= 00:01:55.111 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:55.111 SPDK_OPENSTACK_NETWORK=0 00:01:55.111 VAGRANT_PACKAGE_BOX=0 00:01:55.111 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_3/spdk/scripts/vagrant/Vagrantfile 00:01:55.111 FORCE_DISTRO=true 00:01:55.111 VAGRANT_BOX_VERSION= 00:01:55.111 EXTRA_VAGRANTFILES= 00:01:55.111 NIC_MODEL=e1000 00:01:55.111 00:01:55.111 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt' 00:01:55.111 /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_3 00:01:57.683 Bringing machine 'default' up with 'libvirt' provider... 00:01:57.941 ==> default: Creating image (snapshot of base box volume). 00:01:57.941 ==> default: Creating domain with the following settings... 
00:01:57.941 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730805476_63420740ad35061954c0 00:01:57.941 ==> default: -- Domain type: kvm 00:01:57.941 ==> default: -- Cpus: 10 00:01:57.941 ==> default: -- Feature: acpi 00:01:57.941 ==> default: -- Feature: apic 00:01:57.941 ==> default: -- Feature: pae 00:01:57.941 ==> default: -- Memory: 12288M 00:01:57.941 ==> default: -- Memory Backing: hugepages: 00:01:57.941 ==> default: -- Management MAC: 00:01:57.941 ==> default: -- Loader: 00:01:57.941 ==> default: -- Nvram: 00:01:57.941 ==> default: -- Base box: spdk/fedora39 00:01:57.941 ==> default: -- Storage pool: default 00:01:57.941 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730805476_63420740ad35061954c0.img (20G) 00:01:57.941 ==> default: -- Volume Cache: default 00:01:57.941 ==> default: -- Kernel: 00:01:57.941 ==> default: -- Initrd: 00:01:57.941 ==> default: -- Graphics Type: vnc 00:01:57.942 ==> default: -- Graphics Port: -1 00:01:57.942 ==> default: -- Graphics IP: 127.0.0.1 00:01:57.942 ==> default: -- Graphics Password: Not defined 00:01:57.942 ==> default: -- Video Type: cirrus 00:01:57.942 ==> default: -- Video VRAM: 9216 00:01:57.942 ==> default: -- Sound Type: 00:01:57.942 ==> default: -- Keymap: en-us 00:01:57.942 ==> default: -- TPM Path: 00:01:57.942 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:57.942 ==> default: -- Command line args: 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> 
value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3, 00:01:57.942 ==> default: -> value=-drive, 00:01:57.942 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0, 00:01:57.942 ==> default: -> value=-device, 00:01:57.942 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:57.942 ==> default: Creating shared folders metadata... 00:01:57.942 ==> default: Starting domain. 00:01:58.877 ==> default: Waiting for domain to get an IP address... 00:02:16.955 ==> default: Waiting for SSH to become available... 00:02:17.936 ==> default: Configuring and enabling network interfaces... 00:02:22.127 default: SSH address: 192.168.121.45:22 00:02:22.127 default: SSH username: vagrant 00:02:22.127 default: SSH auth method: private key 00:02:23.502 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:30.075 ==> default: Mounting SSHFS shared folder... 00:02:31.457 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:31.457 ==> default: Checking Mount.. 00:02:32.842 ==> default: Folder Successfully Mounted! 00:02:32.842 00:02:32.842 SUCCESS! 00:02:32.842 00:02:32.842 cd to /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt and type "vagrant ssh" to use. 00:02:32.842 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:32.842 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt" to destroy all trace of vm. 00:02:32.842 00:02:32.851 [Pipeline] } 00:02:32.865 [Pipeline] // stage 00:02:32.875 [Pipeline] dir 00:02:32.876 Running in /var/jenkins/workspace/nvme-vg-autotest_3/fedora39-libvirt 00:02:32.877 [Pipeline] { 00:02:32.890 [Pipeline] catchError 00:02:32.891 [Pipeline] { 00:02:32.903 [Pipeline] sh 00:02:33.180 + vagrant ssh-config --host vagrant 00:02:33.180 + sed -ne '/^Host/,$p' 00:02:33.180 + tee ssh_conf 00:02:35.707 Host vagrant 00:02:35.707 HostName 192.168.121.45 00:02:35.707 User vagrant 00:02:35.707 Port 22 00:02:35.707 UserKnownHostsFile /dev/null 00:02:35.707 StrictHostKeyChecking no 00:02:35.707 PasswordAuthentication no 00:02:35.707 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:35.707 IdentitiesOnly yes 00:02:35.707 LogLevel FATAL 00:02:35.707 ForwardAgent yes 00:02:35.707 ForwardX11 yes 00:02:35.707 00:02:35.720 [Pipeline] withEnv 00:02:35.723 [Pipeline] { 00:02:35.735 [Pipeline] sh 00:02:36.047 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash 00:02:36.047 source /etc/os-release 00:02:36.047 [[ -e /image.version ]] && img=$(< /image.version) 00:02:36.047 # Minimal, systemd-like check. 
00:02:36.047 if [[ -e /.dockerenv ]]; then 00:02:36.047 # Clear garbage from the node'\''s name: 00:02:36.047 # agt-er_autotest_547-896 -> autotest_547-896 00:02:36.047 # $HOSTNAME is the actual container id 00:02:36.047 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:36.047 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:36.047 # We can assume this is a mount from a host where container is running, 00:02:36.047 # so fetch its hostname to easily identify the target swarm worker. 00:02:36.047 container="$(< /etc/hostname) ($agent)" 00:02:36.047 else 00:02:36.047 # Fallback 00:02:36.047 container=$agent 00:02:36.047 fi 00:02:36.047 fi 00:02:36.047 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:36.047 ' 00:02:36.059 [Pipeline] } 00:02:36.075 [Pipeline] // withEnv 00:02:36.084 [Pipeline] setCustomBuildProperty 00:02:36.100 [Pipeline] stage 00:02:36.102 [Pipeline] { (Tests) 00:02:36.119 [Pipeline] sh 00:02:36.402 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:36.418 [Pipeline] sh 00:02:36.700 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:36.713 [Pipeline] timeout 00:02:36.714 Timeout set to expire in 50 min 00:02:36.716 [Pipeline] { 00:02:36.730 [Pipeline] sh 00:02:37.009 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard' 00:02:37.267 HEAD is now at 1aeff8917 lib/reduce: Add a chunk data read/write cache 00:02:37.279 [Pipeline] sh 00:02:37.578 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo' 00:02:37.591 [Pipeline] sh 00:02:37.868 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_3/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:37.884 [Pipeline] sh 00:02:38.171 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo' 00:02:38.171 ++ readlink -f spdk_repo 00:02:38.171 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:38.171 + [[ -n /home/vagrant/spdk_repo ]] 00:02:38.171 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:38.171 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:38.171 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:38.171 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:38.171 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:38.171 + [[ nvme-vg-autotest == pkgdep-* ]] 00:02:38.171 + cd /home/vagrant/spdk_repo 00:02:38.171 + source /etc/os-release 00:02:38.171 ++ NAME='Fedora Linux' 00:02:38.171 ++ VERSION='39 (Cloud Edition)' 00:02:38.171 ++ ID=fedora 00:02:38.171 ++ VERSION_ID=39 00:02:38.171 ++ VERSION_CODENAME= 00:02:38.171 ++ PLATFORM_ID=platform:f39 00:02:38.171 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:38.171 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:38.171 ++ LOGO=fedora-logo-icon 00:02:38.171 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:38.171 ++ HOME_URL=https://fedoraproject.org/ 00:02:38.171 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:38.171 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:38.171 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:38.171 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:38.171 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:38.171 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:38.171 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:38.171 ++ SUPPORT_END=2024-11-12 00:02:38.171 ++ VARIANT='Cloud Edition' 00:02:38.171 ++ VARIANT_ID=cloud 00:02:38.171 + uname -a 00:02:38.171 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:38.171 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:38.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:38.738 Hugepages 00:02:38.738 node hugesize free / total 00:02:38.738 node0 1048576kB 0 / 0 00:02:38.738 node0 2048kB 0 / 0 00:02:38.738 00:02:38.738 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:38.738 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:38.996 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:38.996 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:02:38.996 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:02:38.996 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:02:38.996 + rm -f /tmp/spdk-ld-path 00:02:38.996 + source autorun-spdk.conf 00:02:38.996 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:38.996 ++ SPDK_TEST_NVME=1 00:02:38.996 ++ SPDK_TEST_FTL=1 00:02:38.996 ++ SPDK_TEST_ISAL=1 00:02:38.996 ++ SPDK_RUN_ASAN=1 00:02:38.996 ++ SPDK_RUN_UBSAN=1 00:02:38.996 ++ SPDK_TEST_XNVME=1 00:02:38.996 ++ SPDK_TEST_NVME_FDP=1 00:02:38.996 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:38.996 ++ RUN_NIGHTLY=0 00:02:38.996 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:38.996 + [[ -n '' ]] 00:02:38.996 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:38.996 + for M in /var/spdk/build-*-manifest.txt 00:02:38.996 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:38.996 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:38.996 + for M in /var/spdk/build-*-manifest.txt 00:02:38.996 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:38.996 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:38.996 + for M in /var/spdk/build-*-manifest.txt 00:02:38.996 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:38.996 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:38.996 ++ uname 00:02:38.996 + [[ Linux == \L\i\n\u\x ]] 00:02:38.996 + sudo dmesg -T 00:02:38.996 + sudo dmesg --clear 00:02:38.996 + dmesg_pid=5024 00:02:38.996 
+ [[ Fedora Linux == FreeBSD ]] 00:02:38.997 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:38.997 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:38.997 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:38.997 + sudo dmesg -Tw 00:02:38.997 + [[ -x /usr/src/fio-static/fio ]] 00:02:38.997 + export FIO_BIN=/usr/src/fio-static/fio 00:02:38.997 + FIO_BIN=/usr/src/fio-static/fio 00:02:38.997 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:38.997 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:38.997 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:38.997 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:38.997 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:38.997 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:38.997 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:38.997 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:38.997 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:38.997 11:18:38 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:38.997 11:18:38 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:38.997 11:18:38 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0 00:02:38.997 11:18:38 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:38.997 11:18:38 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:38.997 11:18:38 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:02:38.997 11:18:38 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:38.997 11:18:38 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:38.997 11:18:38 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:38.997 11:18:38 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:38.997 11:18:38 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:38.997 11:18:38 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.997 11:18:38 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.997 11:18:38 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.997 11:18:38 -- paths/export.sh@5 -- $ export PATH 00:02:38.997 11:18:38 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:38.997 11:18:38 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:38.997 11:18:38 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:38.997 11:18:38 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730805518.XXXXXX 00:02:38.997 11:18:38 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730805518.utcw7d 00:02:38.997 11:18:38 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:38.997 11:18:38 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:38.997 11:18:38 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:38.997 11:18:38 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:38.997 11:18:38 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:38.997 11:18:38 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:38.997 11:18:38 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:02:38.997 11:18:38 -- common/autotest_common.sh@10 -- $ set +x 00:02:39.254 11:18:38 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme' 00:02:39.254 11:18:38 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:39.254 11:18:38 -- pm/common@17 -- $ local monitor 00:02:39.254 11:18:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.254 11:18:38 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:39.254 11:18:38 -- pm/common@21 -- $ date +%s 00:02:39.254 11:18:38 -- pm/common@25 -- $ sleep 1 00:02:39.254 11:18:38 -- pm/common@21 -- $ date +%s 00:02:39.254 11:18:38 -- pm/common@21 -- $ 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730805518 00:02:39.254 11:18:38 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730805518 00:02:39.254 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730805518_collect-cpu-load.pm.log 00:02:39.254 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730805518_collect-vmstat.pm.log 00:02:40.184 11:18:39 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:40.184 11:18:39 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:40.184 11:18:39 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:40.184 11:18:39 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:40.184 11:18:39 -- spdk/autobuild.sh@16 -- $ date -u 00:02:40.184 Tue Nov 5 11:18:39 AM UTC 2024 00:02:40.184 11:18:39 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:40.184 v25.01-pre-125-g1aeff8917 00:02:40.184 11:18:39 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:40.184 11:18:39 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:40.184 11:18:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:40.184 11:18:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:40.184 11:18:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.184 ************************************ 00:02:40.184 START TEST asan 00:02:40.184 ************************************ 00:02:40.184 using asan 00:02:40.184 11:18:39 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan' 00:02:40.184 00:02:40.184 real 0m0.000s 00:02:40.184 user 0m0.000s 00:02:40.184 sys 0m0.000s 00:02:40.184 11:18:39 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:40.184 11:18:39 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:40.184 ************************************ 00:02:40.184 END TEST asan 00:02:40.184 ************************************ 00:02:40.184 11:18:39 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:40.184 11:18:39 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:40.184 11:18:39 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:02:40.184 11:18:39 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:02:40.184 11:18:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:40.184 ************************************ 00:02:40.184 START TEST ubsan 00:02:40.184 ************************************ 00:02:40.184 using ubsan 00:02:40.184 11:18:39 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:02:40.185 00:02:40.185 real 0m0.000s 00:02:40.185 user 0m0.000s 00:02:40.185 sys 0m0.000s 00:02:40.185 11:18:39 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:02:40.185 ************************************ 00:02:40.185 END TEST ubsan 00:02:40.185 11:18:39 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:40.185 ************************************ 00:02:40.185 11:18:39 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:40.185 11:18:39 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:40.185 11:18:39 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:40.185 11:18:39 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:40.185 11:18:39 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:40.185 11:18:39 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:40.185 11:18:39 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 
00:02:40.185 11:18:39 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:40.185 11:18:39 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared 00:02:40.185 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:40.185 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:40.748 Using 'verbs' RDMA provider 00:02:51.404 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:01.370 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:01.370 Creating mk/config.mk...done. 00:03:01.370 Creating mk/cc.flags.mk...done. 00:03:01.370 Type 'make' to build. 00:03:01.370 11:19:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:01.370 11:19:00 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:03:01.370 11:19:00 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:03:01.370 11:19:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:01.370 ************************************ 00:03:01.370 START TEST make 00:03:01.370 ************************************ 00:03:01.370 11:19:00 make -- common/autotest_common.sh@1127 -- $ make -j10 00:03:01.628 (cd /home/vagrant/spdk_repo/spdk/xnvme && \ 00:03:01.628 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \ 00:03:01.628 meson setup builddir \ 00:03:01.628 -Dwith-libaio=enabled \ 00:03:01.628 -Dwith-liburing=enabled \ 00:03:01.628 -Dwith-libvfn=disabled \ 00:03:01.628 -Dwith-spdk=disabled \ 00:03:01.628 -Dexamples=false \ 00:03:01.628 -Dtests=false \ 00:03:01.628 -Dtools=false && \ 00:03:01.628 meson compile -C builddir && \ 00:03:01.628 cd -) 00:03:01.628 make[1]: Nothing to be done for 'all'. 
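The xnvme options printed in the wrapped meson invocation above can also be exercised outside of the SPDK make; a minimal sketch for reproducing that subproject configuration by hand, assuming the same spdk_repo checkout and that meson and ninja are installed:

    cd /home/vagrant/spdk_repo/spdk/xnvme
    # mirror the flags used by the SPDK build: libaio and io_uring backends enabled,
    # libvfn/SPDK backends disabled, examples/tests/tools skipped
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir
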
00:03:03.528 The Meson build system 00:03:03.528 Version: 1.5.0 00:03:03.528 Source dir: /home/vagrant/spdk_repo/spdk/xnvme 00:03:03.528 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:03.528 Build type: native build 00:03:03.528 Project name: xnvme 00:03:03.528 Project version: 0.7.5 00:03:03.528 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:03.528 C linker for the host machine: cc ld.bfd 2.40-14 00:03:03.528 Host machine cpu family: x86_64 00:03:03.528 Host machine cpu: x86_64 00:03:03.528 Message: host_machine.system: linux 00:03:03.528 Compiler for C supports arguments -Wno-missing-braces: YES 00:03:03.528 Compiler for C supports arguments -Wno-cast-function-type: YES 00:03:03.529 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:03:03.529 Run-time dependency threads found: YES 00:03:03.529 Has header "setupapi.h" : NO 00:03:03.529 Has header "linux/blkzoned.h" : YES 00:03:03.529 Has header "linux/blkzoned.h" : YES (cached) 00:03:03.529 Has header "libaio.h" : YES 00:03:03.529 Library aio found: YES 00:03:03.529 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:03.529 Run-time dependency liburing found: YES 2.2 00:03:03.529 Dependency libvfn skipped: feature with-libvfn disabled 00:03:03.529 Found CMake: /usr/bin/cmake (3.27.7) 00:03:03.529 Run-time dependency libisal found: NO (tried pkgconfig and cmake) 00:03:03.529 Subproject spdk : skipped: feature with-spdk disabled 00:03:03.529 Run-time dependency appleframeworks found: NO (tried framework) 00:03:03.529 Run-time dependency appleframeworks found: NO (tried framework) 00:03:03.529 Library rt found: YES 00:03:03.529 Checking for function "clock_gettime" with dependency -lrt: YES 00:03:03.529 Configuring xnvme_config.h using configuration 00:03:03.529 Configuring xnvme.spec using configuration 00:03:03.529 Run-time dependency bash-completion found: YES 2.11 00:03:03.529 Message: Bash-completions: /usr/share/bash-completion/completions 00:03:03.529 Program cp found: YES (/usr/bin/cp) 00:03:03.529 Build targets in project: 3 00:03:03.529 00:03:03.529 xnvme 0.7.5 00:03:03.529 00:03:03.529 Subprojects 00:03:03.529 spdk : NO Feature 'with-spdk' disabled 00:03:03.529 00:03:03.529 User defined options 00:03:03.529 examples : false 00:03:03.529 tests : false 00:03:03.529 tools : false 00:03:03.529 with-libaio : enabled 00:03:03.529 with-liburing: enabled 00:03:03.529 with-libvfn : disabled 00:03:03.529 with-spdk : disabled 00:03:03.529 00:03:03.529 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:03.786 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir' 00:03:03.786 [1/76] Generating toolbox/xnvme-driver-script with a custom command 00:03:03.786 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o 00:03:03.786 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o 00:03:03.786 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o 00:03:03.786 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o 00:03:03.786 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o 00:03:03.786 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o 00:03:03.786 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o 00:03:04.045 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o 00:03:04.045 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o 00:03:04.045 
[11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o 00:03:04.045 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o 00:03:04.045 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o 00:03:04.045 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o 00:03:04.045 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o 00:03:04.045 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o 00:03:04.045 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o 00:03:04.045 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o 00:03:04.045 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o 00:03:04.045 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o 00:03:04.045 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o 00:03:04.045 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o 00:03:04.045 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o 00:03:04.045 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o 00:03:04.045 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o 00:03:04.045 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o 00:03:04.045 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o 00:03:04.045 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o 00:03:04.045 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o 00:03:04.045 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o 00:03:04.045 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o 00:03:04.045 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o 00:03:04.045 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o 00:03:04.045 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o 00:03:04.045 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o 00:03:04.045 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o 00:03:04.045 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o 00:03:04.045 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o 00:03:04.045 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o 00:03:04.045 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o 00:03:04.045 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o 00:03:04.045 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o 00:03:04.303 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o 00:03:04.303 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o 00:03:04.303 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o 00:03:04.303 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o 00:03:04.303 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o 00:03:04.303 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o 00:03:04.303 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o 00:03:04.303 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o 00:03:04.303 
[51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o 00:03:04.303 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o 00:03:04.303 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o 00:03:04.303 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o 00:03:04.303 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o 00:03:04.303 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o 00:03:04.303 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o 00:03:04.303 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o 00:03:04.303 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o 00:03:04.303 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o 00:03:04.303 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o 00:03:04.303 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o 00:03:04.303 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o 00:03:04.303 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o 00:03:04.303 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o 00:03:04.303 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o 00:03:04.303 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o 00:03:04.303 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o 00:03:04.561 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o 00:03:04.561 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o 00:03:04.561 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o 00:03:04.561 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o 00:03:04.561 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o 00:03:04.819 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o 00:03:04.819 [75/76] Linking static target lib/libxnvme.a 00:03:04.819 [76/76] Linking target lib/libxnvme.so.0.7.5 00:03:04.819 INFO: autodetecting backend as ninja 00:03:04.819 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir 00:03:04.819 /home/vagrant/spdk_repo/spdk/xnvmebuild 00:03:12.939 The Meson build system 00:03:12.939 Version: 1.5.0 00:03:12.939 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:12.939 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:12.939 Build type: native build 00:03:12.939 Program cat found: YES (/usr/bin/cat) 00:03:12.939 Project name: DPDK 00:03:12.939 Project version: 24.03.0 00:03:12.939 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:12.939 C linker for the host machine: cc ld.bfd 2.40-14 00:03:12.939 Host machine cpu family: x86_64 00:03:12.939 Host machine cpu: x86_64 00:03:12.939 Message: ## Building in Developer Mode ## 00:03:12.939 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:12.939 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:12.939 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:12.939 Program python3 found: YES (/usr/bin/python3) 00:03:12.939 Program cat found: YES (/usr/bin/cat) 00:03:12.939 Compiler for C supports arguments -march=native: YES 00:03:12.939 Checking for size of "void *" : 8 00:03:12.939 Checking for size of "void *" : 8 (cached) 00:03:12.939 Compiler for C supports link arguments 
-Wl,--undefined-version: YES 00:03:12.939 Library m found: YES 00:03:12.939 Library numa found: YES 00:03:12.939 Has header "numaif.h" : YES 00:03:12.939 Library fdt found: NO 00:03:12.939 Library execinfo found: NO 00:03:12.939 Has header "execinfo.h" : YES 00:03:12.939 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:12.939 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:12.939 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:12.939 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:12.939 Run-time dependency openssl found: YES 3.1.1 00:03:12.939 Run-time dependency libpcap found: YES 1.10.4 00:03:12.939 Has header "pcap.h" with dependency libpcap: YES 00:03:12.939 Compiler for C supports arguments -Wcast-qual: YES 00:03:12.939 Compiler for C supports arguments -Wdeprecated: YES 00:03:12.939 Compiler for C supports arguments -Wformat: YES 00:03:12.939 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:12.939 Compiler for C supports arguments -Wformat-security: NO 00:03:12.939 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:12.939 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:12.939 Compiler for C supports arguments -Wnested-externs: YES 00:03:12.939 Compiler for C supports arguments -Wold-style-definition: YES 00:03:12.939 Compiler for C supports arguments -Wpointer-arith: YES 00:03:12.939 Compiler for C supports arguments -Wsign-compare: YES 00:03:12.939 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:12.939 Compiler for C supports arguments -Wundef: YES 00:03:12.939 Compiler for C supports arguments -Wwrite-strings: YES 00:03:12.939 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:12.939 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:12.939 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:12.939 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:12.939 Program objdump found: YES (/usr/bin/objdump) 00:03:12.939 Compiler for C supports arguments -mavx512f: YES 00:03:12.939 Checking if "AVX512 checking" compiles: YES 00:03:12.939 Fetching value of define "__SSE4_2__" : 1 00:03:12.939 Fetching value of define "__AES__" : 1 00:03:12.939 Fetching value of define "__AVX__" : 1 00:03:12.939 Fetching value of define "__AVX2__" : 1 00:03:12.939 Fetching value of define "__AVX512BW__" : 1 00:03:12.939 Fetching value of define "__AVX512CD__" : 1 00:03:12.939 Fetching value of define "__AVX512DQ__" : 1 00:03:12.939 Fetching value of define "__AVX512F__" : 1 00:03:12.939 Fetching value of define "__AVX512VL__" : 1 00:03:12.939 Fetching value of define "__PCLMUL__" : 1 00:03:12.939 Fetching value of define "__RDRND__" : 1 00:03:12.939 Fetching value of define "__RDSEED__" : 1 00:03:12.939 Fetching value of define "__VPCLMULQDQ__" : 1 00:03:12.939 Fetching value of define "__znver1__" : (undefined) 00:03:12.939 Fetching value of define "__znver2__" : (undefined) 00:03:12.939 Fetching value of define "__znver3__" : (undefined) 00:03:12.939 Fetching value of define "__znver4__" : (undefined) 00:03:12.939 Library asan found: YES 00:03:12.939 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:12.939 Message: lib/log: Defining dependency "log" 00:03:12.939 Message: lib/kvargs: Defining dependency "kvargs" 00:03:12.939 Message: lib/telemetry: Defining dependency "telemetry" 00:03:12.939 Library rt found: YES 00:03:12.939 Checking for function "getentropy" : NO 00:03:12.939 Message: 
lib/eal: Defining dependency "eal" 00:03:12.939 Message: lib/ring: Defining dependency "ring" 00:03:12.939 Message: lib/rcu: Defining dependency "rcu" 00:03:12.939 Message: lib/mempool: Defining dependency "mempool" 00:03:12.939 Message: lib/mbuf: Defining dependency "mbuf" 00:03:12.939 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:12.939 Fetching value of define "__AVX512F__" : 1 (cached) 00:03:12.939 Fetching value of define "__AVX512BW__" : 1 (cached) 00:03:12.939 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:03:12.939 Fetching value of define "__AVX512VL__" : 1 (cached) 00:03:12.939 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:03:12.939 Compiler for C supports arguments -mpclmul: YES 00:03:12.939 Compiler for C supports arguments -maes: YES 00:03:12.939 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:12.939 Compiler for C supports arguments -mavx512bw: YES 00:03:12.939 Compiler for C supports arguments -mavx512dq: YES 00:03:12.939 Compiler for C supports arguments -mavx512vl: YES 00:03:12.939 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:12.939 Compiler for C supports arguments -mavx2: YES 00:03:12.939 Compiler for C supports arguments -mavx: YES 00:03:12.939 Message: lib/net: Defining dependency "net" 00:03:12.939 Message: lib/meter: Defining dependency "meter" 00:03:12.939 Message: lib/ethdev: Defining dependency "ethdev" 00:03:12.939 Message: lib/pci: Defining dependency "pci" 00:03:12.939 Message: lib/cmdline: Defining dependency "cmdline" 00:03:12.939 Message: lib/hash: Defining dependency "hash" 00:03:12.939 Message: lib/timer: Defining dependency "timer" 00:03:12.939 Message: lib/compressdev: Defining dependency "compressdev" 00:03:12.939 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:12.939 Message: lib/dmadev: Defining dependency "dmadev" 00:03:12.939 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:12.939 Message: lib/power: Defining dependency "power" 00:03:12.939 Message: lib/reorder: Defining dependency "reorder" 00:03:12.939 Message: lib/security: Defining dependency "security" 00:03:12.939 Has header "linux/userfaultfd.h" : YES 00:03:12.939 Has header "linux/vduse.h" : YES 00:03:12.939 Message: lib/vhost: Defining dependency "vhost" 00:03:12.939 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:12.939 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:12.939 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:12.939 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:12.939 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:12.939 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:12.939 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:12.939 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:12.939 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:12.939 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:12.939 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:12.939 Configuring doxy-api-html.conf using configuration 00:03:12.939 Configuring doxy-api-man.conf using configuration 00:03:12.939 Program mandb found: YES (/usr/bin/mandb) 00:03:12.939 Program sphinx-build found: NO 00:03:12.939 Configuring rte_build_config.h using configuration 00:03:12.939 Message: 00:03:12.939 ================= 00:03:12.939 Applications Enabled 00:03:12.939 
================= 00:03:12.940 00:03:12.940 apps: 00:03:12.940 00:03:12.940 00:03:12.940 Message: 00:03:12.940 ================= 00:03:12.940 Libraries Enabled 00:03:12.940 ================= 00:03:12.940 00:03:12.940 libs: 00:03:12.940 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:12.940 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:12.940 cryptodev, dmadev, power, reorder, security, vhost, 00:03:12.940 00:03:12.940 Message: 00:03:12.940 =============== 00:03:12.940 Drivers Enabled 00:03:12.940 =============== 00:03:12.940 00:03:12.940 common: 00:03:12.940 00:03:12.940 bus: 00:03:12.940 pci, vdev, 00:03:12.940 mempool: 00:03:12.940 ring, 00:03:12.940 dma: 00:03:12.940 00:03:12.940 net: 00:03:12.940 00:03:12.940 crypto: 00:03:12.940 00:03:12.940 compress: 00:03:12.940 00:03:12.940 vdpa: 00:03:12.940 00:03:12.940 00:03:12.940 Message: 00:03:12.940 ================= 00:03:12.940 Content Skipped 00:03:12.940 ================= 00:03:12.940 00:03:12.940 apps: 00:03:12.940 dumpcap: explicitly disabled via build config 00:03:12.940 graph: explicitly disabled via build config 00:03:12.940 pdump: explicitly disabled via build config 00:03:12.940 proc-info: explicitly disabled via build config 00:03:12.940 test-acl: explicitly disabled via build config 00:03:12.940 test-bbdev: explicitly disabled via build config 00:03:12.940 test-cmdline: explicitly disabled via build config 00:03:12.940 test-compress-perf: explicitly disabled via build config 00:03:12.940 test-crypto-perf: explicitly disabled via build config 00:03:12.940 test-dma-perf: explicitly disabled via build config 00:03:12.940 test-eventdev: explicitly disabled via build config 00:03:12.940 test-fib: explicitly disabled via build config 00:03:12.940 test-flow-perf: explicitly disabled via build config 00:03:12.940 test-gpudev: explicitly disabled via build config 00:03:12.940 test-mldev: explicitly disabled via build config 00:03:12.940 test-pipeline: explicitly disabled via build config 00:03:12.940 test-pmd: explicitly disabled via build config 00:03:12.940 test-regex: explicitly disabled via build config 00:03:12.940 test-sad: explicitly disabled via build config 00:03:12.940 test-security-perf: explicitly disabled via build config 00:03:12.940 00:03:12.940 libs: 00:03:12.940 argparse: explicitly disabled via build config 00:03:12.940 metrics: explicitly disabled via build config 00:03:12.940 acl: explicitly disabled via build config 00:03:12.940 bbdev: explicitly disabled via build config 00:03:12.940 bitratestats: explicitly disabled via build config 00:03:12.940 bpf: explicitly disabled via build config 00:03:12.940 cfgfile: explicitly disabled via build config 00:03:12.940 distributor: explicitly disabled via build config 00:03:12.940 efd: explicitly disabled via build config 00:03:12.940 eventdev: explicitly disabled via build config 00:03:12.940 dispatcher: explicitly disabled via build config 00:03:12.940 gpudev: explicitly disabled via build config 00:03:12.940 gro: explicitly disabled via build config 00:03:12.940 gso: explicitly disabled via build config 00:03:12.940 ip_frag: explicitly disabled via build config 00:03:12.940 jobstats: explicitly disabled via build config 00:03:12.940 latencystats: explicitly disabled via build config 00:03:12.940 lpm: explicitly disabled via build config 00:03:12.940 member: explicitly disabled via build config 00:03:12.940 pcapng: explicitly disabled via build config 00:03:12.940 rawdev: explicitly disabled via build config 00:03:12.940 regexdev: explicitly 
disabled via build config 00:03:12.940 mldev: explicitly disabled via build config 00:03:12.940 rib: explicitly disabled via build config 00:03:12.940 sched: explicitly disabled via build config 00:03:12.940 stack: explicitly disabled via build config 00:03:12.940 ipsec: explicitly disabled via build config 00:03:12.940 pdcp: explicitly disabled via build config 00:03:12.940 fib: explicitly disabled via build config 00:03:12.940 port: explicitly disabled via build config 00:03:12.940 pdump: explicitly disabled via build config 00:03:12.940 table: explicitly disabled via build config 00:03:12.940 pipeline: explicitly disabled via build config 00:03:12.940 graph: explicitly disabled via build config 00:03:12.940 node: explicitly disabled via build config 00:03:12.940 00:03:12.940 drivers: 00:03:12.940 common/cpt: not in enabled drivers build config 00:03:12.940 common/dpaax: not in enabled drivers build config 00:03:12.940 common/iavf: not in enabled drivers build config 00:03:12.940 common/idpf: not in enabled drivers build config 00:03:12.940 common/ionic: not in enabled drivers build config 00:03:12.940 common/mvep: not in enabled drivers build config 00:03:12.940 common/octeontx: not in enabled drivers build config 00:03:12.940 bus/auxiliary: not in enabled drivers build config 00:03:12.940 bus/cdx: not in enabled drivers build config 00:03:12.940 bus/dpaa: not in enabled drivers build config 00:03:12.940 bus/fslmc: not in enabled drivers build config 00:03:12.940 bus/ifpga: not in enabled drivers build config 00:03:12.940 bus/platform: not in enabled drivers build config 00:03:12.940 bus/uacce: not in enabled drivers build config 00:03:12.940 bus/vmbus: not in enabled drivers build config 00:03:12.940 common/cnxk: not in enabled drivers build config 00:03:12.940 common/mlx5: not in enabled drivers build config 00:03:12.940 common/nfp: not in enabled drivers build config 00:03:12.940 common/nitrox: not in enabled drivers build config 00:03:12.940 common/qat: not in enabled drivers build config 00:03:12.940 common/sfc_efx: not in enabled drivers build config 00:03:12.940 mempool/bucket: not in enabled drivers build config 00:03:12.940 mempool/cnxk: not in enabled drivers build config 00:03:12.940 mempool/dpaa: not in enabled drivers build config 00:03:12.940 mempool/dpaa2: not in enabled drivers build config 00:03:12.940 mempool/octeontx: not in enabled drivers build config 00:03:12.940 mempool/stack: not in enabled drivers build config 00:03:12.940 dma/cnxk: not in enabled drivers build config 00:03:12.940 dma/dpaa: not in enabled drivers build config 00:03:12.940 dma/dpaa2: not in enabled drivers build config 00:03:12.940 dma/hisilicon: not in enabled drivers build config 00:03:12.940 dma/idxd: not in enabled drivers build config 00:03:12.940 dma/ioat: not in enabled drivers build config 00:03:12.940 dma/skeleton: not in enabled drivers build config 00:03:12.940 net/af_packet: not in enabled drivers build config 00:03:12.940 net/af_xdp: not in enabled drivers build config 00:03:12.940 net/ark: not in enabled drivers build config 00:03:12.940 net/atlantic: not in enabled drivers build config 00:03:12.940 net/avp: not in enabled drivers build config 00:03:12.940 net/axgbe: not in enabled drivers build config 00:03:12.940 net/bnx2x: not in enabled drivers build config 00:03:12.940 net/bnxt: not in enabled drivers build config 00:03:12.940 net/bonding: not in enabled drivers build config 00:03:12.940 net/cnxk: not in enabled drivers build config 00:03:12.940 net/cpfl: not in enabled drivers 
build config 00:03:12.940 net/cxgbe: not in enabled drivers build config 00:03:12.940 net/dpaa: not in enabled drivers build config 00:03:12.940 net/dpaa2: not in enabled drivers build config 00:03:12.940 net/e1000: not in enabled drivers build config 00:03:12.940 net/ena: not in enabled drivers build config 00:03:12.940 net/enetc: not in enabled drivers build config 00:03:12.940 net/enetfec: not in enabled drivers build config 00:03:12.940 net/enic: not in enabled drivers build config 00:03:12.940 net/failsafe: not in enabled drivers build config 00:03:12.940 net/fm10k: not in enabled drivers build config 00:03:12.940 net/gve: not in enabled drivers build config 00:03:12.940 net/hinic: not in enabled drivers build config 00:03:12.940 net/hns3: not in enabled drivers build config 00:03:12.940 net/i40e: not in enabled drivers build config 00:03:12.940 net/iavf: not in enabled drivers build config 00:03:12.940 net/ice: not in enabled drivers build config 00:03:12.940 net/idpf: not in enabled drivers build config 00:03:12.940 net/igc: not in enabled drivers build config 00:03:12.940 net/ionic: not in enabled drivers build config 00:03:12.940 net/ipn3ke: not in enabled drivers build config 00:03:12.940 net/ixgbe: not in enabled drivers build config 00:03:12.940 net/mana: not in enabled drivers build config 00:03:12.940 net/memif: not in enabled drivers build config 00:03:12.940 net/mlx4: not in enabled drivers build config 00:03:12.940 net/mlx5: not in enabled drivers build config 00:03:12.940 net/mvneta: not in enabled drivers build config 00:03:12.940 net/mvpp2: not in enabled drivers build config 00:03:12.940 net/netvsc: not in enabled drivers build config 00:03:12.940 net/nfb: not in enabled drivers build config 00:03:12.940 net/nfp: not in enabled drivers build config 00:03:12.940 net/ngbe: not in enabled drivers build config 00:03:12.940 net/null: not in enabled drivers build config 00:03:12.940 net/octeontx: not in enabled drivers build config 00:03:12.940 net/octeon_ep: not in enabled drivers build config 00:03:12.940 net/pcap: not in enabled drivers build config 00:03:12.940 net/pfe: not in enabled drivers build config 00:03:12.940 net/qede: not in enabled drivers build config 00:03:12.940 net/ring: not in enabled drivers build config 00:03:12.940 net/sfc: not in enabled drivers build config 00:03:12.940 net/softnic: not in enabled drivers build config 00:03:12.940 net/tap: not in enabled drivers build config 00:03:12.940 net/thunderx: not in enabled drivers build config 00:03:12.940 net/txgbe: not in enabled drivers build config 00:03:12.940 net/vdev_netvsc: not in enabled drivers build config 00:03:12.940 net/vhost: not in enabled drivers build config 00:03:12.940 net/virtio: not in enabled drivers build config 00:03:12.940 net/vmxnet3: not in enabled drivers build config 00:03:12.940 raw/*: missing internal dependency, "rawdev" 00:03:12.940 crypto/armv8: not in enabled drivers build config 00:03:12.940 crypto/bcmfs: not in enabled drivers build config 00:03:12.940 crypto/caam_jr: not in enabled drivers build config 00:03:12.940 crypto/ccp: not in enabled drivers build config 00:03:12.940 crypto/cnxk: not in enabled drivers build config 00:03:12.940 crypto/dpaa_sec: not in enabled drivers build config 00:03:12.940 crypto/dpaa2_sec: not in enabled drivers build config 00:03:12.940 crypto/ipsec_mb: not in enabled drivers build config 00:03:12.940 crypto/mlx5: not in enabled drivers build config 00:03:12.940 crypto/mvsam: not in enabled drivers build config 00:03:12.940 crypto/nitrox: 
not in enabled drivers build config 00:03:12.940 crypto/null: not in enabled drivers build config 00:03:12.940 crypto/octeontx: not in enabled drivers build config 00:03:12.941 crypto/openssl: not in enabled drivers build config 00:03:12.941 crypto/scheduler: not in enabled drivers build config 00:03:12.941 crypto/uadk: not in enabled drivers build config 00:03:12.941 crypto/virtio: not in enabled drivers build config 00:03:12.941 compress/isal: not in enabled drivers build config 00:03:12.941 compress/mlx5: not in enabled drivers build config 00:03:12.941 compress/nitrox: not in enabled drivers build config 00:03:12.941 compress/octeontx: not in enabled drivers build config 00:03:12.941 compress/zlib: not in enabled drivers build config 00:03:12.941 regex/*: missing internal dependency, "regexdev" 00:03:12.941 ml/*: missing internal dependency, "mldev" 00:03:12.941 vdpa/ifc: not in enabled drivers build config 00:03:12.941 vdpa/mlx5: not in enabled drivers build config 00:03:12.941 vdpa/nfp: not in enabled drivers build config 00:03:12.941 vdpa/sfc: not in enabled drivers build config 00:03:12.941 event/*: missing internal dependency, "eventdev" 00:03:12.941 baseband/*: missing internal dependency, "bbdev" 00:03:12.941 gpu/*: missing internal dependency, "gpudev" 00:03:12.941 00:03:12.941 00:03:12.941 Build targets in project: 84 00:03:12.941 00:03:12.941 DPDK 24.03.0 00:03:12.941 00:03:12.941 User defined options 00:03:12.941 buildtype : debug 00:03:12.941 default_library : shared 00:03:12.941 libdir : lib 00:03:12.941 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:12.941 b_sanitize : address 00:03:12.941 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:12.941 c_link_args : 00:03:12.941 cpu_instruction_set: native 00:03:12.941 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:12.941 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:12.941 enable_docs : false 00:03:12.941 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:12.941 enable_kmods : false 00:03:12.941 max_lcores : 128 00:03:12.941 tests : false 00:03:12.941 00:03:12.941 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:12.941 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:12.941 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:12.941 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:12.941 [3/267] Linking static target lib/librte_kvargs.a 00:03:12.941 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:12.941 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:12.941 [6/267] Linking static target lib/librte_log.a 00:03:12.941 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:12.941 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:12.941 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:12.941 [10/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:12.941 [11/267] Compiling 
C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:12.941 [12/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.941 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:12.941 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:13.198 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:13.198 [16/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:13.198 [17/267] Linking static target lib/librte_telemetry.a 00:03:13.198 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:13.456 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:13.456 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:13.456 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:13.456 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:13.456 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:13.456 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:13.456 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:13.456 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:13.456 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:13.456 [28/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.714 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:13.714 [30/267] Linking target lib/librte_log.so.24.1 00:03:13.714 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:13.714 [32/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:13.714 [33/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:13.714 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:13.714 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:13.714 [36/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:13.714 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:13.714 [38/267] Linking target lib/librte_kvargs.so.24.1 00:03:13.973 [39/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.973 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:13.973 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:13.973 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:13.973 [43/267] Linking target lib/librte_telemetry.so.24.1 00:03:13.973 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:13.973 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:13.973 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:13.973 [47/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:13.973 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:13.973 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:13.973 [50/267] 
Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:14.231 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:14.231 [52/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:14.231 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:14.231 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:14.231 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:14.231 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:14.231 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:14.489 [58/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:14.489 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:14.489 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:14.489 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:14.489 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:14.489 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:14.489 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:14.489 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:14.489 [66/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:14.489 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:14.747 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:14.747 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:14.747 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:14.747 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:14.747 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:15.004 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:15.004 [74/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:15.004 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:15.004 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:15.004 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:15.004 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:15.004 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:15.263 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:15.263 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:15.263 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:15.263 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:15.263 [84/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:15.263 [85/267] Linking static target lib/librte_eal.a 00:03:15.263 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:15.263 [87/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:15.263 [88/267] Linking static target lib/librte_ring.a 00:03:15.522 [89/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:15.522 [90/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:15.522 [91/267] Compiling 
C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:15.522 [92/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:15.522 [93/267] Linking static target lib/librte_rcu.a 00:03:15.522 [94/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:15.522 [95/267] Linking static target lib/librte_mempool.a 00:03:15.522 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:15.781 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:15.781 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:15.781 [99/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.781 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:15.781 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.781 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:16.039 [103/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:16.039 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:16.039 [105/267] Linking static target lib/librte_mbuf.a 00:03:16.039 [106/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:16.039 [107/267] Linking static target lib/librte_meter.a 00:03:16.039 [108/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:03:16.039 [109/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:16.039 [110/267] Linking static target lib/librte_net.a 00:03:16.039 [111/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:16.297 [112/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:16.297 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:16.297 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:16.298 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.298 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.558 [117/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.558 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:16.817 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:16.817 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:16.817 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.817 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:16.817 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:17.075 [124/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:17.075 [125/267] Linking static target lib/librte_pci.a 00:03:17.075 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:17.075 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:17.075 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:17.075 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:17.075 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:17.075 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:17.075 [132/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:17.333 [133/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:17.333 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:17.333 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:17.333 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:17.333 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:17.334 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:17.334 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:17.334 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:17.334 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:17.334 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:17.334 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:17.334 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:17.592 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:17.592 [146/267] Linking static target lib/librte_cmdline.a 00:03:17.592 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:17.592 [148/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:17.850 [149/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:17.850 [150/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:17.850 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:17.850 [152/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:17.850 [153/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:17.850 [154/267] Linking static target lib/librte_timer.a 00:03:18.108 [155/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:18.108 [156/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:18.108 [157/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:18.108 [158/267] Linking static target lib/librte_compressdev.a 00:03:18.108 [159/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:18.108 [160/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:18.366 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:18.366 [162/267] Linking static target lib/librte_hash.a 00:03:18.366 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:18.366 [164/267] Linking static target lib/librte_dmadev.a 00:03:18.366 [165/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.366 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:18.623 [167/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:18.623 [168/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:18.623 [169/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:18.623 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:18.623 [171/267] Linking static target lib/librte_ethdev.a 00:03:18.623 [172/267] Generating 
lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.623 [173/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.881 [174/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:18.881 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:18.881 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:18.881 [177/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:18.881 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:18.881 [179/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:18.881 [180/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.881 [181/267] Linking static target lib/librte_cryptodev.a 00:03:18.881 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:19.141 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.141 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:19.141 [185/267] Linking static target lib/librte_power.a 00:03:19.436 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:19.436 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:19.436 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:19.436 [189/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:19.436 [190/267] Linking static target lib/librte_reorder.a 00:03:19.436 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:19.436 [192/267] Linking static target lib/librte_security.a 00:03:19.694 [193/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.952 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:19.952 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.952 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:19.952 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:19.952 [198/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.210 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:20.210 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:20.210 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:20.210 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:20.468 [203/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:20.468 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:20.468 [205/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:20.468 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:20.468 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:20.468 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:20.468 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.725 [210/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.725 [211/267] Generating 
drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.725 [212/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.725 [213/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.725 [214/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.725 [215/267] Linking static target drivers/librte_bus_vdev.a 00:03:20.725 [216/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.725 [217/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.725 [218/267] Linking static target drivers/librte_bus_pci.a 00:03:20.983 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.983 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.983 [221/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.983 [222/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.983 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.983 [224/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.983 [225/267] Linking static target drivers/librte_mempool_ring.a 00:03:21.241 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.807 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:22.374 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.632 [229/267] Linking target lib/librte_eal.so.24.1 00:03:22.632 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:22.632 [231/267] Linking target lib/librte_pci.so.24.1 00:03:22.632 [232/267] Linking target lib/librte_ring.so.24.1 00:03:22.632 [233/267] Linking target lib/librte_meter.so.24.1 00:03:22.632 [234/267] Linking target lib/librte_dmadev.so.24.1 00:03:22.632 [235/267] Linking target lib/librte_timer.so.24.1 00:03:22.632 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:22.891 [237/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:22.891 [238/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:22.891 [239/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:22.891 [240/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:22.891 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:22.891 [242/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:22.891 [243/267] Linking target lib/librte_rcu.so.24.1 00:03:22.891 [244/267] Linking target lib/librte_mempool.so.24.1 00:03:22.891 [245/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:22.891 [246/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:22.891 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:22.891 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:23.149 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:23.149 [250/267] Linking target lib/librte_reorder.so.24.1 00:03:23.149 
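The 267-target DPDK build finishing up in these ninja steps was configured with the options summarized in the "User defined options" block earlier in this log (debug buildtype, shared libraries, b_sanitize=address, and the long disable_apps/disable_libs lists). A manual configuration along those lines would look roughly like the sketch below; the actual invocation is driven by SPDK's build scripts and is not captured here, so treat paths and option values as illustrative only.

    # Hedged reconstruction of the DPDK meson setup summarized above; not the
    # exact command this CI run used.
    meson setup build-tmp \
        -Dbuildtype=debug \
        -Ddefault_library=shared \
        -Db_sanitize=address \
        -Ddisable_apps=dumpcap,graph,pdump,proc-info \
        -Ddisable_libs=acl,argparse,bbdev,bitratestats \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Dmax_lcores=128 -Dtests=false -Denable_docs=false
    # (disable lists abbreviated; the full lists appear in the summary above)
    # The [N/267] lines in this log are the resulting ninja run:
    ninja -C build-tmp -j 10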
[251/267] Linking target lib/librte_compressdev.so.24.1 00:03:23.149 [252/267] Linking target lib/librte_net.so.24.1 00:03:23.149 [253/267] Linking target lib/librte_cryptodev.so.24.1 00:03:23.149 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:23.149 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:23.149 [256/267] Linking target lib/librte_hash.so.24.1 00:03:23.149 [257/267] Linking target lib/librte_security.so.24.1 00:03:23.149 [258/267] Linking target lib/librte_cmdline.so.24.1 00:03:23.408 [259/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:23.666 [260/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.666 [261/267] Linking target lib/librte_ethdev.so.24.1 00:03:23.924 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:23.924 [263/267] Linking target lib/librte_power.so.24.1 00:03:24.183 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:24.183 [265/267] Linking static target lib/librte_vhost.a 00:03:25.558 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:25.558 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:25.558 INFO: autodetecting backend as ninja 00:03:25.558 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:40.448 CC lib/ut_mock/mock.o 00:03:40.448 CC lib/log/log.o 00:03:40.448 CC lib/log/log_deprecated.o 00:03:40.448 CC lib/log/log_flags.o 00:03:40.448 CC lib/ut/ut.o 00:03:40.448 LIB libspdk_ut_mock.a 00:03:40.448 SO libspdk_ut_mock.so.6.0 00:03:40.448 LIB libspdk_ut.a 00:03:40.448 SO libspdk_ut.so.2.0 00:03:40.448 LIB libspdk_log.a 00:03:40.448 SYMLINK libspdk_ut_mock.so 00:03:40.448 SO libspdk_log.so.7.1 00:03:40.448 SYMLINK libspdk_ut.so 00:03:40.448 SYMLINK libspdk_log.so 00:03:40.448 CC lib/util/base64.o 00:03:40.448 CC lib/util/bit_array.o 00:03:40.448 CC lib/util/cpuset.o 00:03:40.448 CC lib/util/crc32c.o 00:03:40.448 CC lib/util/crc32.o 00:03:40.448 CC lib/util/crc16.o 00:03:40.448 CC lib/ioat/ioat.o 00:03:40.448 CXX lib/trace_parser/trace.o 00:03:40.448 CC lib/dma/dma.o 00:03:40.448 CC lib/vfio_user/host/vfio_user_pci.o 00:03:40.448 CC lib/vfio_user/host/vfio_user.o 00:03:40.448 CC lib/util/crc32_ieee.o 00:03:40.448 CC lib/util/crc64.o 00:03:40.448 CC lib/util/dif.o 00:03:40.448 CC lib/util/fd.o 00:03:40.448 LIB libspdk_dma.a 00:03:40.448 CC lib/util/fd_group.o 00:03:40.448 SO libspdk_dma.so.5.0 00:03:40.448 CC lib/util/file.o 00:03:40.448 CC lib/util/hexlify.o 00:03:40.448 SYMLINK libspdk_dma.so 00:03:40.448 CC lib/util/iov.o 00:03:40.448 LIB libspdk_ioat.a 00:03:40.448 SO libspdk_ioat.so.7.0 00:03:40.448 CC lib/util/math.o 00:03:40.448 CC lib/util/net.o 00:03:40.448 SYMLINK libspdk_ioat.so 00:03:40.448 CC lib/util/pipe.o 00:03:40.448 LIB libspdk_vfio_user.a 00:03:40.448 CC lib/util/strerror_tls.o 00:03:40.448 SO libspdk_vfio_user.so.5.0 00:03:40.448 CC lib/util/string.o 00:03:40.448 SYMLINK libspdk_vfio_user.so 00:03:40.448 CC lib/util/uuid.o 00:03:40.448 CC lib/util/xor.o 00:03:40.448 CC lib/util/zipf.o 00:03:40.448 CC lib/util/md5.o 00:03:40.448 LIB libspdk_util.a 00:03:40.448 SO libspdk_util.so.10.0 00:03:40.448 SYMLINK libspdk_util.so 00:03:40.448 LIB libspdk_trace_parser.a 00:03:40.448 SO libspdk_trace_parser.so.6.0 00:03:40.448 CC lib/env_dpdk/env.o 00:03:40.448 CC 
lib/rdma_utils/rdma_utils.o 00:03:40.448 CC lib/env_dpdk/memory.o 00:03:40.448 CC lib/env_dpdk/pci.o 00:03:40.448 CC lib/vmd/vmd.o 00:03:40.448 CC lib/rdma_provider/common.o 00:03:40.448 CC lib/json/json_parse.o 00:03:40.448 CC lib/conf/conf.o 00:03:40.448 CC lib/idxd/idxd.o 00:03:40.448 SYMLINK libspdk_trace_parser.so 00:03:40.448 CC lib/idxd/idxd_user.o 00:03:40.448 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:40.448 LIB libspdk_conf.a 00:03:40.448 CC lib/json/json_util.o 00:03:40.448 SO libspdk_conf.so.6.0 00:03:40.448 SYMLINK libspdk_conf.so 00:03:40.448 CC lib/json/json_write.o 00:03:40.448 CC lib/vmd/led.o 00:03:40.448 LIB libspdk_rdma_utils.a 00:03:40.448 LIB libspdk_rdma_provider.a 00:03:40.448 SO libspdk_rdma_utils.so.1.0 00:03:40.449 CC lib/env_dpdk/init.o 00:03:40.449 SO libspdk_rdma_provider.so.6.0 00:03:40.449 SYMLINK libspdk_rdma_utils.so 00:03:40.449 CC lib/env_dpdk/threads.o 00:03:40.449 SYMLINK libspdk_rdma_provider.so 00:03:40.449 CC lib/idxd/idxd_kernel.o 00:03:40.449 CC lib/env_dpdk/pci_ioat.o 00:03:40.449 CC lib/env_dpdk/pci_virtio.o 00:03:40.449 CC lib/env_dpdk/pci_vmd.o 00:03:40.449 CC lib/env_dpdk/pci_idxd.o 00:03:40.449 CC lib/env_dpdk/pci_event.o 00:03:40.449 CC lib/env_dpdk/sigbus_handler.o 00:03:40.449 CC lib/env_dpdk/pci_dpdk.o 00:03:40.449 LIB libspdk_idxd.a 00:03:40.449 LIB libspdk_json.a 00:03:40.449 SO libspdk_idxd.so.12.1 00:03:40.449 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:40.449 SO libspdk_json.so.6.0 00:03:40.449 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:40.449 SYMLINK libspdk_idxd.so 00:03:40.449 SYMLINK libspdk_json.so 00:03:40.449 LIB libspdk_vmd.a 00:03:40.449 SO libspdk_vmd.so.6.0 00:03:40.449 SYMLINK libspdk_vmd.so 00:03:40.449 CC lib/jsonrpc/jsonrpc_server.o 00:03:40.449 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:40.449 CC lib/jsonrpc/jsonrpc_client.o 00:03:40.449 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:40.449 LIB libspdk_jsonrpc.a 00:03:40.449 LIB libspdk_env_dpdk.a 00:03:40.708 SO libspdk_jsonrpc.so.6.0 00:03:40.708 SO libspdk_env_dpdk.so.15.1 00:03:40.708 SYMLINK libspdk_jsonrpc.so 00:03:40.708 SYMLINK libspdk_env_dpdk.so 00:03:40.965 CC lib/rpc/rpc.o 00:03:40.965 LIB libspdk_rpc.a 00:03:40.965 SO libspdk_rpc.so.6.0 00:03:41.223 SYMLINK libspdk_rpc.so 00:03:41.223 CC lib/trace/trace.o 00:03:41.223 CC lib/trace/trace_flags.o 00:03:41.223 CC lib/trace/trace_rpc.o 00:03:41.223 CC lib/keyring/keyring.o 00:03:41.223 CC lib/keyring/keyring_rpc.o 00:03:41.223 CC lib/notify/notify.o 00:03:41.223 CC lib/notify/notify_rpc.o 00:03:41.481 LIB libspdk_notify.a 00:03:41.481 SO libspdk_notify.so.6.0 00:03:41.481 SYMLINK libspdk_notify.so 00:03:41.481 LIB libspdk_keyring.a 00:03:41.481 SO libspdk_keyring.so.2.0 00:03:41.481 LIB libspdk_trace.a 00:03:41.481 SO libspdk_trace.so.11.0 00:03:41.481 SYMLINK libspdk_keyring.so 00:03:41.738 SYMLINK libspdk_trace.so 00:03:41.738 CC lib/thread/thread.o 00:03:41.738 CC lib/thread/iobuf.o 00:03:41.738 CC lib/sock/sock_rpc.o 00:03:41.738 CC lib/sock/sock.o 00:03:42.304 LIB libspdk_sock.a 00:03:42.304 SO libspdk_sock.so.10.0 00:03:42.304 SYMLINK libspdk_sock.so 00:03:42.562 CC lib/nvme/nvme_ctrlr.o 00:03:42.562 CC lib/nvme/nvme_fabric.o 00:03:42.562 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:42.562 CC lib/nvme/nvme_ns.o 00:03:42.562 CC lib/nvme/nvme_ns_cmd.o 00:03:42.562 CC lib/nvme/nvme_pcie_common.o 00:03:42.562 CC lib/nvme/nvme_qpair.o 00:03:42.562 CC lib/nvme/nvme.o 00:03:42.562 CC lib/nvme/nvme_pcie.o 00:03:43.128 CC lib/nvme/nvme_quirks.o 00:03:43.128 CC lib/nvme/nvme_transport.o 00:03:43.128 CC lib/nvme/nvme_discovery.o 
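The CC/LIB/SO/SYMLINK lines in this part of the log are SPDK's own make-based build of its libraries (log, util, trace, thread, nvme, and so on) on top of the DPDK environment built above. As a hedged sketch, a comparable local build is normally driven from the SPDK repository root like this; the specific configure flags this CI job passed are not visible in this portion of the log.

    # Illustrative only: a typical debug build of SPDK with sanitizers and
    # shared libraries, consistent with the b_sanitize=address DPDK config above.
    ./configure --enable-debug --enable-asan --enable-ubsan --with-shared
    make -j10    # emits CC/LIB/SO/SYMLINK lines like those recorded in this log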
00:03:43.386 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:43.386 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:43.386 CC lib/nvme/nvme_tcp.o 00:03:43.386 CC lib/nvme/nvme_opal.o 00:03:43.386 LIB libspdk_thread.a 00:03:43.386 CC lib/nvme/nvme_io_msg.o 00:03:43.386 SO libspdk_thread.so.11.0 00:03:43.386 CC lib/nvme/nvme_poll_group.o 00:03:43.386 SYMLINK libspdk_thread.so 00:03:43.386 CC lib/nvme/nvme_zns.o 00:03:43.644 CC lib/nvme/nvme_stubs.o 00:03:43.644 CC lib/nvme/nvme_auth.o 00:03:43.644 CC lib/nvme/nvme_cuse.o 00:03:43.644 CC lib/nvme/nvme_rdma.o 00:03:43.902 CC lib/accel/accel.o 00:03:43.902 CC lib/blob/blobstore.o 00:03:44.161 CC lib/init/json_config.o 00:03:44.161 CC lib/virtio/virtio.o 00:03:44.161 CC lib/virtio/virtio_vhost_user.o 00:03:44.420 CC lib/init/subsystem.o 00:03:44.420 CC lib/virtio/virtio_vfio_user.o 00:03:44.420 CC lib/virtio/virtio_pci.o 00:03:44.420 CC lib/init/subsystem_rpc.o 00:03:44.679 CC lib/blob/request.o 00:03:44.679 CC lib/accel/accel_rpc.o 00:03:44.679 CC lib/accel/accel_sw.o 00:03:44.679 CC lib/init/rpc.o 00:03:44.679 CC lib/fsdev/fsdev.o 00:03:44.679 LIB libspdk_virtio.a 00:03:44.679 CC lib/fsdev/fsdev_io.o 00:03:44.679 LIB libspdk_init.a 00:03:44.679 SO libspdk_virtio.so.7.0 00:03:44.679 CC lib/blob/zeroes.o 00:03:44.679 SO libspdk_init.so.6.0 00:03:44.939 CC lib/fsdev/fsdev_rpc.o 00:03:44.939 LIB libspdk_nvme.a 00:03:44.939 SYMLINK libspdk_virtio.so 00:03:44.939 CC lib/blob/blob_bs_dev.o 00:03:44.939 SYMLINK libspdk_init.so 00:03:44.939 SO libspdk_nvme.so.14.1 00:03:44.939 CC lib/event/app.o 00:03:44.939 CC lib/event/app_rpc.o 00:03:44.939 CC lib/event/log_rpc.o 00:03:44.939 CC lib/event/reactor.o 00:03:45.197 CC lib/event/scheduler_static.o 00:03:45.197 LIB libspdk_accel.a 00:03:45.197 SYMLINK libspdk_nvme.so 00:03:45.197 SO libspdk_accel.so.16.0 00:03:45.197 LIB libspdk_fsdev.a 00:03:45.197 SO libspdk_fsdev.so.2.0 00:03:45.197 SYMLINK libspdk_accel.so 00:03:45.197 SYMLINK libspdk_fsdev.so 00:03:45.456 CC lib/bdev/part.o 00:03:45.456 CC lib/bdev/scsi_nvme.o 00:03:45.456 CC lib/bdev/bdev.o 00:03:45.456 CC lib/bdev/bdev_rpc.o 00:03:45.456 CC lib/bdev/bdev_zone.o 00:03:45.456 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:45.456 LIB libspdk_event.a 00:03:45.456 SO libspdk_event.so.14.0 00:03:45.714 SYMLINK libspdk_event.so 00:03:45.972 LIB libspdk_fuse_dispatcher.a 00:03:45.972 SO libspdk_fuse_dispatcher.so.1.0 00:03:46.231 SYMLINK libspdk_fuse_dispatcher.so 00:03:47.165 LIB libspdk_blob.a 00:03:47.165 SO libspdk_blob.so.11.0 00:03:47.165 SYMLINK libspdk_blob.so 00:03:47.422 CC lib/blobfs/blobfs.o 00:03:47.422 CC lib/blobfs/tree.o 00:03:47.422 CC lib/lvol/lvol.o 00:03:47.988 LIB libspdk_bdev.a 00:03:47.988 SO libspdk_bdev.so.17.0 00:03:48.246 LIB libspdk_lvol.a 00:03:48.246 SYMLINK libspdk_bdev.so 00:03:48.246 SO libspdk_lvol.so.10.0 00:03:48.246 LIB libspdk_blobfs.a 00:03:48.246 SYMLINK libspdk_lvol.so 00:03:48.246 SO libspdk_blobfs.so.10.0 00:03:48.246 CC lib/ftl/ftl_init.o 00:03:48.246 CC lib/ftl/ftl_core.o 00:03:48.246 CC lib/ftl/ftl_layout.o 00:03:48.246 CC lib/ftl/ftl_debug.o 00:03:48.246 CC lib/scsi/dev.o 00:03:48.246 CC lib/ftl/ftl_io.o 00:03:48.246 CC lib/nbd/nbd.o 00:03:48.246 CC lib/ublk/ublk.o 00:03:48.246 CC lib/nvmf/ctrlr.o 00:03:48.246 SYMLINK libspdk_blobfs.so 00:03:48.246 CC lib/nvmf/ctrlr_discovery.o 00:03:48.513 CC lib/nvmf/ctrlr_bdev.o 00:03:48.513 CC lib/nvmf/subsystem.o 00:03:48.513 CC lib/scsi/lun.o 00:03:48.513 CC lib/scsi/port.o 00:03:48.513 CC lib/scsi/scsi.o 00:03:48.771 CC lib/ftl/ftl_sb.o 00:03:48.771 CC lib/nbd/nbd_rpc.o 00:03:48.771 CC 
lib/ftl/ftl_l2p.o 00:03:48.771 CC lib/ftl/ftl_l2p_flat.o 00:03:48.771 CC lib/scsi/scsi_bdev.o 00:03:48.771 CC lib/scsi/scsi_pr.o 00:03:48.771 LIB libspdk_nbd.a 00:03:48.771 SO libspdk_nbd.so.7.0 00:03:48.771 CC lib/ftl/ftl_nv_cache.o 00:03:48.771 SYMLINK libspdk_nbd.so 00:03:48.771 CC lib/ublk/ublk_rpc.o 00:03:48.771 CC lib/ftl/ftl_band.o 00:03:49.030 CC lib/ftl/ftl_band_ops.o 00:03:49.030 CC lib/ftl/ftl_writer.o 00:03:49.030 LIB libspdk_ublk.a 00:03:49.030 CC lib/scsi/scsi_rpc.o 00:03:49.030 SO libspdk_ublk.so.3.0 00:03:49.030 CC lib/ftl/ftl_rq.o 00:03:49.030 SYMLINK libspdk_ublk.so 00:03:49.030 CC lib/nvmf/nvmf.o 00:03:49.030 CC lib/ftl/ftl_reloc.o 00:03:49.289 CC lib/ftl/ftl_l2p_cache.o 00:03:49.289 CC lib/scsi/task.o 00:03:49.289 CC lib/ftl/ftl_p2l.o 00:03:49.289 CC lib/ftl/ftl_p2l_log.o 00:03:49.289 CC lib/ftl/mngt/ftl_mngt.o 00:03:49.289 LIB libspdk_scsi.a 00:03:49.547 SO libspdk_scsi.so.9.0 00:03:49.547 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:49.547 SYMLINK libspdk_scsi.so 00:03:49.547 CC lib/nvmf/nvmf_rpc.o 00:03:49.547 CC lib/nvmf/transport.o 00:03:49.547 CC lib/nvmf/tcp.o 00:03:49.547 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:49.806 CC lib/iscsi/conn.o 00:03:49.806 CC lib/iscsi/init_grp.o 00:03:49.806 CC lib/iscsi/iscsi.o 00:03:49.806 CC lib/iscsi/param.o 00:03:49.806 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:50.075 CC lib/nvmf/stubs.o 00:03:50.075 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:50.075 CC lib/iscsi/portal_grp.o 00:03:50.075 CC lib/vhost/vhost.o 00:03:50.075 CC lib/nvmf/mdns_server.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:50.354 CC lib/vhost/vhost_rpc.o 00:03:50.354 CC lib/vhost/vhost_scsi.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:50.354 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:50.611 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:50.611 CC lib/ftl/utils/ftl_conf.o 00:03:50.611 CC lib/ftl/utils/ftl_md.o 00:03:50.611 CC lib/iscsi/tgt_node.o 00:03:50.611 CC lib/iscsi/iscsi_subsystem.o 00:03:50.611 CC lib/nvmf/rdma.o 00:03:50.611 CC lib/nvmf/auth.o 00:03:50.869 CC lib/iscsi/iscsi_rpc.o 00:03:50.869 CC lib/vhost/vhost_blk.o 00:03:50.869 CC lib/ftl/utils/ftl_mempool.o 00:03:50.869 CC lib/iscsi/task.o 00:03:51.128 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.128 CC lib/ftl/utils/ftl_property.o 00:03:51.128 CC lib/vhost/rte_vhost_user.o 00:03:51.128 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.128 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.128 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.128 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.128 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.128 LIB libspdk_iscsi.a 00:03:51.128 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.386 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.386 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.386 SO libspdk_iscsi.so.8.0 00:03:51.386 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.386 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.386 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.386 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:51.386 SYMLINK libspdk_iscsi.so 00:03:51.386 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:51.386 CC lib/ftl/base/ftl_base_dev.o 00:03:51.386 CC lib/ftl/base/ftl_base_bdev.o 00:03:51.386 CC lib/ftl/ftl_trace.o 00:03:51.644 LIB libspdk_ftl.a 00:03:51.903 LIB libspdk_vhost.a 00:03:51.903 SO libspdk_ftl.so.9.0 00:03:51.903 SO libspdk_vhost.so.8.0 00:03:51.903 SYMLINK libspdk_vhost.so 00:03:52.162 SYMLINK 
libspdk_ftl.so 00:03:52.459 LIB libspdk_nvmf.a 00:03:52.718 SO libspdk_nvmf.so.20.0 00:03:52.718 SYMLINK libspdk_nvmf.so 00:03:52.976 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.235 CC module/fsdev/aio/fsdev_aio.o 00:03:53.235 CC module/keyring/file/keyring.o 00:03:53.235 CC module/sock/posix/posix.o 00:03:53.235 CC module/blob/bdev/blob_bdev.o 00:03:53.235 CC module/accel/error/accel_error.o 00:03:53.235 CC module/scheduler/gscheduler/gscheduler.o 00:03:53.235 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:53.235 CC module/accel/ioat/accel_ioat.o 00:03:53.235 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:53.235 LIB libspdk_env_dpdk_rpc.a 00:03:53.235 SO libspdk_env_dpdk_rpc.so.6.0 00:03:53.235 CC module/accel/error/accel_error_rpc.o 00:03:53.235 CC module/keyring/file/keyring_rpc.o 00:03:53.235 LIB libspdk_scheduler_gscheduler.a 00:03:53.235 LIB libspdk_scheduler_dpdk_governor.a 00:03:53.235 SYMLINK libspdk_env_dpdk_rpc.so 00:03:53.235 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:53.235 SO libspdk_scheduler_gscheduler.so.4.0 00:03:53.235 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:53.235 CC module/accel/ioat/accel_ioat_rpc.o 00:03:53.235 LIB libspdk_scheduler_dynamic.a 00:03:53.235 SO libspdk_scheduler_dynamic.so.4.0 00:03:53.235 SYMLINK libspdk_scheduler_gscheduler.so 00:03:53.235 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:53.235 CC module/fsdev/aio/linux_aio_mgr.o 00:03:53.235 LIB libspdk_keyring_file.a 00:03:53.492 SYMLINK libspdk_scheduler_dynamic.so 00:03:53.492 LIB libspdk_accel_error.a 00:03:53.492 LIB libspdk_blob_bdev.a 00:03:53.492 SO libspdk_keyring_file.so.2.0 00:03:53.492 SO libspdk_blob_bdev.so.11.0 00:03:53.492 SO libspdk_accel_error.so.2.0 00:03:53.492 LIB libspdk_accel_ioat.a 00:03:53.492 SYMLINK libspdk_keyring_file.so 00:03:53.492 SO libspdk_accel_ioat.so.6.0 00:03:53.492 SYMLINK libspdk_blob_bdev.so 00:03:53.492 SYMLINK libspdk_accel_error.so 00:03:53.492 SYMLINK libspdk_accel_ioat.so 00:03:53.492 CC module/accel/dsa/accel_dsa.o 00:03:53.493 CC module/accel/dsa/accel_dsa_rpc.o 00:03:53.493 CC module/keyring/linux/keyring.o 00:03:53.493 CC module/accel/iaa/accel_iaa.o 00:03:53.751 CC module/accel/iaa/accel_iaa_rpc.o 00:03:53.751 CC module/bdev/error/vbdev_error.o 00:03:53.751 CC module/keyring/linux/keyring_rpc.o 00:03:53.751 CC module/bdev/delay/vbdev_delay.o 00:03:53.751 CC module/blobfs/bdev/blobfs_bdev.o 00:03:53.751 CC module/bdev/gpt/gpt.o 00:03:53.751 LIB libspdk_sock_posix.a 00:03:53.751 SO libspdk_sock_posix.so.6.0 00:03:53.751 LIB libspdk_keyring_linux.a 00:03:53.751 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:53.751 LIB libspdk_accel_iaa.a 00:03:53.751 LIB libspdk_accel_dsa.a 00:03:53.751 SO libspdk_keyring_linux.so.1.0 00:03:53.751 SO libspdk_accel_iaa.so.3.0 00:03:53.751 SO libspdk_accel_dsa.so.5.0 00:03:53.751 LIB libspdk_fsdev_aio.a 00:03:53.751 SYMLINK libspdk_sock_posix.so 00:03:53.751 CC module/bdev/gpt/vbdev_gpt.o 00:03:53.751 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:53.751 SO libspdk_fsdev_aio.so.1.0 00:03:53.751 SYMLINK libspdk_keyring_linux.so 00:03:53.751 SYMLINK libspdk_accel_iaa.so 00:03:53.751 SYMLINK libspdk_accel_dsa.so 00:03:53.751 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.009 SYMLINK libspdk_fsdev_aio.so 00:03:54.009 LIB libspdk_bdev_delay.a 00:03:54.009 SO libspdk_bdev_delay.so.6.0 00:03:54.009 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.009 CC module/bdev/malloc/bdev_malloc.o 00:03:54.009 LIB libspdk_blobfs_bdev.a 00:03:54.009 CC module/bdev/null/bdev_null.o 00:03:54.009 LIB libspdk_bdev_error.a 
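The module/bdev objects being compiled around this point (error, delay, gpt, lvol, malloc, null, passthru, raid, split, zone_block, nvme, xnvme, aio, ftl, iscsi, virtio) are SPDK's block-device backends. Purely as an illustration of what these modules provide once linked into a target application, and not something this log actually runs, a malloc-backed bdev can be created at runtime over JSON-RPC roughly as follows; verify the argument order against the rpc.py help for your SPDK version.

    # Hypothetical runtime example, unrelated to the build steps in this log:
    # create a 64 MiB RAM-backed block device named Malloc0 with 512-byte blocks.
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512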
00:03:54.009 SO libspdk_blobfs_bdev.so.6.0 00:03:54.009 LIB libspdk_bdev_gpt.a 00:03:54.009 SO libspdk_bdev_error.so.6.0 00:03:54.009 CC module/bdev/nvme/bdev_nvme.o 00:03:54.009 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.009 SYMLINK libspdk_bdev_delay.so 00:03:54.009 CC module/bdev/null/bdev_null_rpc.o 00:03:54.009 SO libspdk_bdev_gpt.so.6.0 00:03:54.009 SYMLINK libspdk_blobfs_bdev.so 00:03:54.009 SYMLINK libspdk_bdev_error.so 00:03:54.009 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.009 SYMLINK libspdk_bdev_gpt.so 00:03:54.009 CC module/bdev/raid/bdev_raid.o 00:03:54.275 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:54.276 LIB libspdk_bdev_null.a 00:03:54.276 CC module/bdev/split/vbdev_split.o 00:03:54.276 CC module/bdev/nvme/nvme_rpc.o 00:03:54.276 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.276 SO libspdk_bdev_null.so.6.0 00:03:54.276 LIB libspdk_bdev_malloc.a 00:03:54.276 SYMLINK libspdk_bdev_null.so 00:03:54.276 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.276 SO libspdk_bdev_malloc.so.6.0 00:03:54.276 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.276 SYMLINK libspdk_bdev_malloc.so 00:03:54.276 CC module/bdev/nvme/vbdev_opal.o 00:03:54.550 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.550 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.550 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.550 LIB libspdk_bdev_passthru.a 00:03:54.550 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.550 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:54.550 SO libspdk_bdev_passthru.so.6.0 00:03:54.550 SYMLINK libspdk_bdev_passthru.so 00:03:54.550 CC module/bdev/raid/bdev_raid_rpc.o 00:03:54.550 LIB libspdk_bdev_split.a 00:03:54.550 CC module/bdev/raid/bdev_raid_sb.o 00:03:54.550 SO libspdk_bdev_split.so.6.0 00:03:54.550 LIB libspdk_bdev_zone_block.a 00:03:54.550 SO libspdk_bdev_zone_block.so.6.0 00:03:54.550 SYMLINK libspdk_bdev_split.so 00:03:54.550 CC module/bdev/raid/raid0.o 00:03:54.550 CC module/bdev/xnvme/bdev_xnvme.o 00:03:54.808 SYMLINK libspdk_bdev_zone_block.so 00:03:54.808 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:54.808 CC module/bdev/raid/raid1.o 00:03:54.808 CC module/bdev/raid/concat.o 00:03:54.808 CC module/bdev/aio/bdev_aio.o 00:03:54.808 LIB libspdk_bdev_lvol.a 00:03:54.808 SO libspdk_bdev_lvol.so.6.0 00:03:54.808 CC module/bdev/aio/bdev_aio_rpc.o 00:03:54.808 SYMLINK libspdk_bdev_lvol.so 00:03:54.808 CC module/bdev/ftl/bdev_ftl.o 00:03:54.808 LIB libspdk_bdev_xnvme.a 00:03:54.808 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.066 SO libspdk_bdev_xnvme.so.3.0 00:03:55.066 CC module/bdev/iscsi/bdev_iscsi.o 00:03:55.066 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.066 SYMLINK libspdk_bdev_xnvme.so 00:03:55.066 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.066 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.066 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:55.066 LIB libspdk_bdev_aio.a 00:03:55.066 LIB libspdk_bdev_ftl.a 00:03:55.066 SO libspdk_bdev_aio.so.6.0 00:03:55.066 LIB libspdk_bdev_raid.a 00:03:55.066 SYMLINK libspdk_bdev_aio.so 00:03:55.066 SO libspdk_bdev_ftl.so.6.0 00:03:55.324 SO libspdk_bdev_raid.so.6.0 00:03:55.324 SYMLINK libspdk_bdev_ftl.so 00:03:55.324 LIB libspdk_bdev_iscsi.a 00:03:55.324 SO libspdk_bdev_iscsi.so.6.0 00:03:55.324 SYMLINK libspdk_bdev_raid.so 00:03:55.324 SYMLINK libspdk_bdev_iscsi.so 00:03:55.582 LIB libspdk_bdev_virtio.a 00:03:55.582 SO libspdk_bdev_virtio.so.6.0 00:03:55.582 SYMLINK libspdk_bdev_virtio.so 00:03:56.149 LIB libspdk_bdev_nvme.a 00:03:56.149 SO libspdk_bdev_nvme.so.7.1 00:03:56.406 SYMLINK 
libspdk_bdev_nvme.so 00:03:56.664 CC module/event/subsystems/scheduler/scheduler.o 00:03:56.664 CC module/event/subsystems/sock/sock.o 00:03:56.664 CC module/event/subsystems/vmd/vmd.o 00:03:56.664 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:56.664 CC module/event/subsystems/iobuf/iobuf.o 00:03:56.664 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:56.664 CC module/event/subsystems/keyring/keyring.o 00:03:56.664 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:56.664 CC module/event/subsystems/fsdev/fsdev.o 00:03:56.924 LIB libspdk_event_keyring.a 00:03:56.924 LIB libspdk_event_scheduler.a 00:03:56.924 LIB libspdk_event_vhost_blk.a 00:03:56.924 LIB libspdk_event_sock.a 00:03:56.924 LIB libspdk_event_vmd.a 00:03:56.924 LIB libspdk_event_iobuf.a 00:03:56.924 LIB libspdk_event_fsdev.a 00:03:56.924 SO libspdk_event_keyring.so.1.0 00:03:56.924 SO libspdk_event_scheduler.so.4.0 00:03:56.924 SO libspdk_event_vhost_blk.so.3.0 00:03:56.924 SO libspdk_event_sock.so.5.0 00:03:56.924 SO libspdk_event_fsdev.so.1.0 00:03:56.924 SO libspdk_event_vmd.so.6.0 00:03:56.924 SO libspdk_event_iobuf.so.3.0 00:03:56.924 SYMLINK libspdk_event_keyring.so 00:03:56.924 SYMLINK libspdk_event_scheduler.so 00:03:56.924 SYMLINK libspdk_event_vhost_blk.so 00:03:56.924 SYMLINK libspdk_event_sock.so 00:03:56.924 SYMLINK libspdk_event_vmd.so 00:03:56.924 SYMLINK libspdk_event_iobuf.so 00:03:56.924 SYMLINK libspdk_event_fsdev.so 00:03:57.186 CC module/event/subsystems/accel/accel.o 00:03:57.186 LIB libspdk_event_accel.a 00:03:57.186 SO libspdk_event_accel.so.6.0 00:03:57.186 SYMLINK libspdk_event_accel.so 00:03:57.447 CC module/event/subsystems/bdev/bdev.o 00:03:57.708 LIB libspdk_event_bdev.a 00:03:57.708 SO libspdk_event_bdev.so.6.0 00:03:57.708 SYMLINK libspdk_event_bdev.so 00:03:57.969 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:57.969 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:57.969 CC module/event/subsystems/ublk/ublk.o 00:03:57.969 CC module/event/subsystems/nbd/nbd.o 00:03:57.969 CC module/event/subsystems/scsi/scsi.o 00:03:57.969 LIB libspdk_event_ublk.a 00:03:57.969 LIB libspdk_event_nbd.a 00:03:57.969 SO libspdk_event_ublk.so.3.0 00:03:57.969 SO libspdk_event_nbd.so.6.0 00:03:57.969 LIB libspdk_event_scsi.a 00:03:57.969 SO libspdk_event_scsi.so.6.0 00:03:57.969 SYMLINK libspdk_event_ublk.so 00:03:57.969 SYMLINK libspdk_event_nbd.so 00:03:57.969 LIB libspdk_event_nvmf.a 00:03:58.227 SYMLINK libspdk_event_scsi.so 00:03:58.227 SO libspdk_event_nvmf.so.6.0 00:03:58.227 SYMLINK libspdk_event_nvmf.so 00:03:58.227 CC module/event/subsystems/iscsi/iscsi.o 00:03:58.227 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.486 LIB libspdk_event_iscsi.a 00:03:58.486 LIB libspdk_event_vhost_scsi.a 00:03:58.486 SO libspdk_event_iscsi.so.6.0 00:03:58.486 SO libspdk_event_vhost_scsi.so.3.0 00:03:58.486 SYMLINK libspdk_event_iscsi.so 00:03:58.486 SYMLINK libspdk_event_vhost_scsi.so 00:03:58.749 SO libspdk.so.6.0 00:03:58.749 SYMLINK libspdk.so 00:03:58.749 CC test/rpc_client/rpc_client_test.o 00:03:58.749 TEST_HEADER include/spdk/accel.h 00:03:58.749 TEST_HEADER include/spdk/accel_module.h 00:03:58.749 TEST_HEADER include/spdk/assert.h 00:03:58.749 CXX app/trace/trace.o 00:03:58.749 TEST_HEADER include/spdk/barrier.h 00:03:58.749 TEST_HEADER include/spdk/base64.h 00:03:58.749 TEST_HEADER include/spdk/bdev.h 00:03:58.749 TEST_HEADER include/spdk/bdev_module.h 00:03:58.749 TEST_HEADER include/spdk/bdev_zone.h 00:03:58.749 TEST_HEADER include/spdk/bit_array.h 00:03:58.749 TEST_HEADER 
include/spdk/bit_pool.h 00:03:58.749 TEST_HEADER include/spdk/blob_bdev.h 00:03:58.749 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:58.749 TEST_HEADER include/spdk/blobfs.h 00:03:58.749 TEST_HEADER include/spdk/blob.h 00:03:58.749 TEST_HEADER include/spdk/conf.h 00:03:58.749 TEST_HEADER include/spdk/config.h 00:03:58.749 TEST_HEADER include/spdk/cpuset.h 00:03:58.749 TEST_HEADER include/spdk/crc16.h 00:03:58.749 TEST_HEADER include/spdk/crc32.h 00:03:58.749 TEST_HEADER include/spdk/crc64.h 00:03:58.749 TEST_HEADER include/spdk/dif.h 00:03:58.749 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:58.749 TEST_HEADER include/spdk/dma.h 00:03:58.749 TEST_HEADER include/spdk/endian.h 00:03:58.749 TEST_HEADER include/spdk/env_dpdk.h 00:03:58.749 TEST_HEADER include/spdk/env.h 00:03:58.749 TEST_HEADER include/spdk/event.h 00:03:58.749 TEST_HEADER include/spdk/fd_group.h 00:03:58.749 TEST_HEADER include/spdk/fd.h 00:03:58.749 TEST_HEADER include/spdk/file.h 00:03:58.749 TEST_HEADER include/spdk/fsdev.h 00:03:58.749 TEST_HEADER include/spdk/fsdev_module.h 00:03:58.749 TEST_HEADER include/spdk/ftl.h 00:03:58.749 CC examples/util/zipf/zipf.o 00:03:58.749 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:58.749 TEST_HEADER include/spdk/gpt_spec.h 00:03:58.749 TEST_HEADER include/spdk/hexlify.h 00:03:58.749 TEST_HEADER include/spdk/histogram_data.h 00:03:58.749 TEST_HEADER include/spdk/idxd.h 00:03:58.749 CC test/thread/poller_perf/poller_perf.o 00:03:58.749 TEST_HEADER include/spdk/idxd_spec.h 00:03:58.749 TEST_HEADER include/spdk/init.h 00:03:58.749 TEST_HEADER include/spdk/ioat.h 00:03:58.749 TEST_HEADER include/spdk/ioat_spec.h 00:03:58.749 TEST_HEADER include/spdk/iscsi_spec.h 00:03:58.749 TEST_HEADER include/spdk/json.h 00:03:58.749 TEST_HEADER include/spdk/jsonrpc.h 00:03:58.749 TEST_HEADER include/spdk/keyring.h 00:03:58.749 TEST_HEADER include/spdk/keyring_module.h 00:03:58.749 TEST_HEADER include/spdk/likely.h 00:03:58.749 TEST_HEADER include/spdk/log.h 00:03:59.009 TEST_HEADER include/spdk/lvol.h 00:03:59.009 CC examples/ioat/perf/perf.o 00:03:59.009 TEST_HEADER include/spdk/md5.h 00:03:59.009 TEST_HEADER include/spdk/memory.h 00:03:59.009 TEST_HEADER include/spdk/mmio.h 00:03:59.009 TEST_HEADER include/spdk/nbd.h 00:03:59.009 TEST_HEADER include/spdk/net.h 00:03:59.009 TEST_HEADER include/spdk/notify.h 00:03:59.009 TEST_HEADER include/spdk/nvme.h 00:03:59.009 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.009 CC test/dma/test_dma/test_dma.o 00:03:59.009 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.009 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.009 CC test/app/bdev_svc/bdev_svc.o 00:03:59.009 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.009 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.009 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.009 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.009 TEST_HEADER include/spdk/nvmf.h 00:03:59.009 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.009 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.009 TEST_HEADER include/spdk/opal.h 00:03:59.009 TEST_HEADER include/spdk/opal_spec.h 00:03:59.009 TEST_HEADER include/spdk/pci_ids.h 00:03:59.009 TEST_HEADER include/spdk/pipe.h 00:03:59.009 TEST_HEADER include/spdk/queue.h 00:03:59.009 TEST_HEADER include/spdk/reduce.h 00:03:59.009 CC test/env/mem_callbacks/mem_callbacks.o 00:03:59.009 TEST_HEADER include/spdk/rpc.h 00:03:59.009 TEST_HEADER include/spdk/scheduler.h 00:03:59.009 TEST_HEADER include/spdk/scsi.h 00:03:59.009 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.009 TEST_HEADER include/spdk/sock.h 
00:03:59.009 TEST_HEADER include/spdk/stdinc.h 00:03:59.009 TEST_HEADER include/spdk/string.h 00:03:59.009 TEST_HEADER include/spdk/thread.h 00:03:59.009 TEST_HEADER include/spdk/trace.h 00:03:59.009 TEST_HEADER include/spdk/trace_parser.h 00:03:59.009 TEST_HEADER include/spdk/tree.h 00:03:59.009 TEST_HEADER include/spdk/ublk.h 00:03:59.009 TEST_HEADER include/spdk/util.h 00:03:59.009 TEST_HEADER include/spdk/uuid.h 00:03:59.009 TEST_HEADER include/spdk/version.h 00:03:59.009 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:59.009 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.009 TEST_HEADER include/spdk/vhost.h 00:03:59.009 TEST_HEADER include/spdk/vmd.h 00:03:59.009 TEST_HEADER include/spdk/xor.h 00:03:59.009 TEST_HEADER include/spdk/zipf.h 00:03:59.009 CXX test/cpp_headers/accel.o 00:03:59.009 LINK rpc_client_test 00:03:59.009 LINK zipf 00:03:59.009 LINK poller_perf 00:03:59.009 LINK interrupt_tgt 00:03:59.009 LINK bdev_svc 00:03:59.009 CXX test/cpp_headers/accel_module.o 00:03:59.009 LINK ioat_perf 00:03:59.009 CXX test/cpp_headers/assert.o 00:03:59.009 CXX test/cpp_headers/barrier.o 00:03:59.009 LINK spdk_trace 00:03:59.270 CC examples/ioat/verify/verify.o 00:03:59.270 CXX test/cpp_headers/base64.o 00:03:59.270 CC test/env/vtophys/vtophys.o 00:03:59.270 CC test/event/event_perf/event_perf.o 00:03:59.270 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:59.270 CC app/trace_record/trace_record.o 00:03:59.270 CC test/event/reactor/reactor.o 00:03:59.270 CXX test/cpp_headers/bdev.o 00:03:59.270 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:59.270 LINK verify 00:03:59.270 LINK test_dma 00:03:59.270 LINK mem_callbacks 00:03:59.529 LINK vtophys 00:03:59.529 LINK event_perf 00:03:59.529 LINK reactor 00:03:59.529 LINK env_dpdk_post_init 00:03:59.529 CXX test/cpp_headers/bdev_module.o 00:03:59.529 CXX test/cpp_headers/bdev_zone.o 00:03:59.529 LINK spdk_trace_record 00:03:59.529 CC test/env/memory/memory_ut.o 00:03:59.529 CC examples/thread/thread/thread_ex.o 00:03:59.529 CC test/event/reactor_perf/reactor_perf.o 00:03:59.529 CC examples/sock/hello_world/hello_sock.o 00:03:59.790 CC test/accel/dif/dif.o 00:03:59.790 CXX test/cpp_headers/bit_array.o 00:03:59.790 CC app/nvmf_tgt/nvmf_main.o 00:03:59.790 LINK nvme_fuzz 00:03:59.790 CC test/env/pci/pci_ut.o 00:03:59.790 LINK reactor_perf 00:03:59.790 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:59.790 LINK thread 00:03:59.790 CXX test/cpp_headers/bit_pool.o 00:03:59.790 LINK hello_sock 00:03:59.790 LINK nvmf_tgt 00:04:00.058 CC test/app/histogram_perf/histogram_perf.o 00:04:00.058 CC test/event/app_repeat/app_repeat.o 00:04:00.058 CXX test/cpp_headers/blob_bdev.o 00:04:00.058 LINK histogram_perf 00:04:00.058 CC test/event/scheduler/scheduler.o 00:04:00.058 LINK app_repeat 00:04:00.058 CXX test/cpp_headers/blobfs_bdev.o 00:04:00.058 CC examples/vmd/lsvmd/lsvmd.o 00:04:00.058 LINK pci_ut 00:04:00.058 CC app/iscsi_tgt/iscsi_tgt.o 00:04:00.345 LINK lsvmd 00:04:00.345 LINK scheduler 00:04:00.345 CC examples/vmd/led/led.o 00:04:00.345 CXX test/cpp_headers/blobfs.o 00:04:00.345 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.345 LINK iscsi_tgt 00:04:00.345 LINK dif 00:04:00.345 LINK led 00:04:00.345 CXX test/cpp_headers/blob.o 00:04:00.345 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.604 CC examples/idxd/perf/perf.o 00:04:00.604 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:00.604 CXX test/cpp_headers/conf.o 00:04:00.604 CC test/blobfs/mkfs/mkfs.o 00:04:00.604 CC app/spdk_tgt/spdk_tgt.o 00:04:00.604 CC app/spdk_lspci/spdk_lspci.o 
00:04:00.604 CC test/app/jsoncat/jsoncat.o 00:04:00.604 CXX test/cpp_headers/config.o 00:04:00.604 LINK memory_ut 00:04:00.604 LINK spdk_lspci 00:04:00.604 CXX test/cpp_headers/cpuset.o 00:04:00.864 LINK jsoncat 00:04:00.864 LINK mkfs 00:04:00.864 LINK spdk_tgt 00:04:00.864 LINK hello_fsdev 00:04:00.864 LINK idxd_perf 00:04:00.864 LINK vhost_fuzz 00:04:00.864 CXX test/cpp_headers/crc16.o 00:04:00.864 CC test/app/stub/stub.o 00:04:01.123 CC test/nvme/aer/aer.o 00:04:01.123 CC test/nvme/reset/reset.o 00:04:01.123 CXX test/cpp_headers/crc32.o 00:04:01.123 CC test/nvme/e2edp/nvme_dp.o 00:04:01.123 CC test/nvme/sgl/sgl.o 00:04:01.123 CC app/spdk_nvme_perf/perf.o 00:04:01.123 CC test/lvol/esnap/esnap.o 00:04:01.123 LINK stub 00:04:01.123 CC examples/accel/perf/accel_perf.o 00:04:01.123 CXX test/cpp_headers/crc64.o 00:04:01.123 LINK sgl 00:04:01.382 LINK reset 00:04:01.382 LINK aer 00:04:01.382 CXX test/cpp_headers/dif.o 00:04:01.382 LINK nvme_dp 00:04:01.382 CC test/nvme/overhead/overhead.o 00:04:01.382 CXX test/cpp_headers/dma.o 00:04:01.382 CXX test/cpp_headers/endian.o 00:04:01.382 CXX test/cpp_headers/env_dpdk.o 00:04:01.382 CXX test/cpp_headers/env.o 00:04:01.382 CXX test/cpp_headers/event.o 00:04:01.382 CC test/nvme/err_injection/err_injection.o 00:04:01.382 LINK iscsi_fuzz 00:04:01.639 LINK accel_perf 00:04:01.639 CC test/bdev/bdevio/bdevio.o 00:04:01.639 LINK overhead 00:04:01.639 CXX test/cpp_headers/fd_group.o 00:04:01.639 CC app/spdk_nvme_identify/identify.o 00:04:01.639 LINK err_injection 00:04:01.639 CC test/nvme/startup/startup.o 00:04:01.639 CC test/nvme/reserve/reserve.o 00:04:01.639 CXX test/cpp_headers/fd.o 00:04:01.899 CXX test/cpp_headers/file.o 00:04:01.899 LINK startup 00:04:01.899 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.899 LINK reserve 00:04:01.899 LINK spdk_nvme_perf 00:04:01.899 CC examples/blob/hello_world/hello_blob.o 00:04:01.899 CXX test/cpp_headers/fsdev.o 00:04:01.899 LINK spdk_nvme_discover 00:04:01.899 CXX test/cpp_headers/fsdev_module.o 00:04:01.899 LINK bdevio 00:04:01.899 CC examples/blob/cli/blobcli.o 00:04:01.899 CC test/nvme/simple_copy/simple_copy.o 00:04:02.196 CC app/spdk_top/spdk_top.o 00:04:02.196 LINK hello_blob 00:04:02.196 CXX test/cpp_headers/ftl.o 00:04:02.196 CXX test/cpp_headers/fuse_dispatcher.o 00:04:02.196 CC app/vhost/vhost.o 00:04:02.196 CC examples/nvme/hello_world/hello_world.o 00:04:02.196 LINK simple_copy 00:04:02.196 LINK vhost 00:04:02.196 CXX test/cpp_headers/gpt_spec.o 00:04:02.457 CC test/nvme/connect_stress/connect_stress.o 00:04:02.457 LINK blobcli 00:04:02.457 CXX test/cpp_headers/hexlify.o 00:04:02.457 CC examples/bdev/hello_world/hello_bdev.o 00:04:02.457 LINK hello_world 00:04:02.457 LINK spdk_nvme_identify 00:04:02.457 CC examples/bdev/bdevperf/bdevperf.o 00:04:02.457 LINK connect_stress 00:04:02.457 CXX test/cpp_headers/histogram_data.o 00:04:02.457 CC app/spdk_dd/spdk_dd.o 00:04:02.457 LINK hello_bdev 00:04:02.717 CC examples/nvme/reconnect/reconnect.o 00:04:02.717 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:02.717 CC app/fio/nvme/fio_plugin.o 00:04:02.717 CXX test/cpp_headers/idxd.o 00:04:02.717 CC test/nvme/boot_partition/boot_partition.o 00:04:02.717 LINK spdk_dd 00:04:02.717 CXX test/cpp_headers/idxd_spec.o 00:04:02.717 CC test/nvme/compliance/nvme_compliance.o 00:04:02.976 LINK boot_partition 00:04:02.976 CXX test/cpp_headers/init.o 00:04:02.976 LINK reconnect 00:04:02.976 LINK spdk_top 00:04:02.976 CC test/nvme/fused_ordering/fused_ordering.o 00:04:02.976 LINK nvme_manage 00:04:02.976 CC 
examples/nvme/arbitration/arbitration.o 00:04:02.976 CXX test/cpp_headers/ioat.o 00:04:03.235 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:03.235 CC app/fio/bdev/fio_plugin.o 00:04:03.235 LINK nvme_compliance 00:04:03.235 LINK spdk_nvme 00:04:03.235 LINK fused_ordering 00:04:03.236 CXX test/cpp_headers/ioat_spec.o 00:04:03.236 CC examples/nvme/hotplug/hotplug.o 00:04:03.236 CXX test/cpp_headers/iscsi_spec.o 00:04:03.236 LINK arbitration 00:04:03.236 CXX test/cpp_headers/json.o 00:04:03.236 LINK bdevperf 00:04:03.236 LINK doorbell_aers 00:04:03.236 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.236 CC examples/nvme/abort/abort.o 00:04:03.496 CXX test/cpp_headers/jsonrpc.o 00:04:03.496 LINK hotplug 00:04:03.496 CXX test/cpp_headers/keyring.o 00:04:03.496 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.496 CC test/nvme/fdp/fdp.o 00:04:03.496 CC test/nvme/cuse/cuse.o 00:04:03.496 LINK cmb_copy 00:04:03.496 LINK spdk_bdev 00:04:03.496 CXX test/cpp_headers/keyring_module.o 00:04:03.496 CXX test/cpp_headers/likely.o 00:04:03.496 CXX test/cpp_headers/log.o 00:04:03.496 LINK pmr_persistence 00:04:03.496 CXX test/cpp_headers/lvol.o 00:04:03.496 CXX test/cpp_headers/md5.o 00:04:03.755 CXX test/cpp_headers/memory.o 00:04:03.755 CXX test/cpp_headers/mmio.o 00:04:03.755 CXX test/cpp_headers/nbd.o 00:04:03.755 CXX test/cpp_headers/net.o 00:04:03.755 LINK fdp 00:04:03.755 CXX test/cpp_headers/notify.o 00:04:03.755 CXX test/cpp_headers/nvme.o 00:04:03.755 LINK abort 00:04:03.755 CXX test/cpp_headers/nvme_intel.o 00:04:03.755 CXX test/cpp_headers/nvme_ocssd.o 00:04:03.755 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:03.755 CXX test/cpp_headers/nvme_spec.o 00:04:03.755 CXX test/cpp_headers/nvme_zns.o 00:04:03.755 CXX test/cpp_headers/nvmf_cmd.o 00:04:03.755 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:03.755 CXX test/cpp_headers/nvmf.o 00:04:04.014 CXX test/cpp_headers/nvmf_spec.o 00:04:04.014 CXX test/cpp_headers/nvmf_transport.o 00:04:04.014 CXX test/cpp_headers/opal.o 00:04:04.014 CXX test/cpp_headers/opal_spec.o 00:04:04.014 CC examples/nvmf/nvmf/nvmf.o 00:04:04.014 CXX test/cpp_headers/pci_ids.o 00:04:04.014 CXX test/cpp_headers/pipe.o 00:04:04.014 CXX test/cpp_headers/queue.o 00:04:04.014 CXX test/cpp_headers/reduce.o 00:04:04.014 CXX test/cpp_headers/rpc.o 00:04:04.014 CXX test/cpp_headers/scheduler.o 00:04:04.014 CXX test/cpp_headers/scsi.o 00:04:04.014 CXX test/cpp_headers/scsi_spec.o 00:04:04.014 CXX test/cpp_headers/sock.o 00:04:04.272 CXX test/cpp_headers/stdinc.o 00:04:04.272 CXX test/cpp_headers/string.o 00:04:04.272 CXX test/cpp_headers/thread.o 00:04:04.272 CXX test/cpp_headers/trace.o 00:04:04.272 CXX test/cpp_headers/trace_parser.o 00:04:04.272 CXX test/cpp_headers/tree.o 00:04:04.272 CXX test/cpp_headers/ublk.o 00:04:04.272 CXX test/cpp_headers/util.o 00:04:04.272 LINK nvmf 00:04:04.272 CXX test/cpp_headers/uuid.o 00:04:04.272 CXX test/cpp_headers/version.o 00:04:04.272 CXX test/cpp_headers/vfio_user_pci.o 00:04:04.272 CXX test/cpp_headers/vfio_user_spec.o 00:04:04.272 CXX test/cpp_headers/vhost.o 00:04:04.272 CXX test/cpp_headers/vmd.o 00:04:04.272 CXX test/cpp_headers/xor.o 00:04:04.272 CXX test/cpp_headers/zipf.o 00:04:04.842 LINK cuse 00:04:06.744 LINK esnap 00:04:06.744 00:04:06.744 real 1m5.487s 00:04:06.744 user 6m4.579s 00:04:06.744 sys 1m2.390s 00:04:06.744 ************************************ 00:04:06.744 END TEST make 00:04:06.744 ************************************ 00:04:06.744 11:20:05 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:06.744 
11:20:05 make -- common/autotest_common.sh@10 -- $ set +x 00:04:06.744 11:20:06 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:06.744 11:20:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:06.744 11:20:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:06.744 11:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.744 11:20:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:06.744 11:20:06 -- pm/common@44 -- $ pid=5066 00:04:06.744 11:20:06 -- pm/common@50 -- $ kill -TERM 5066 00:04:06.744 11:20:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.744 11:20:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:06.744 11:20:06 -- pm/common@44 -- $ pid=5068 00:04:06.744 11:20:06 -- pm/common@50 -- $ kill -TERM 5068 00:04:06.744 11:20:06 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:06.744 11:20:06 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:07.002 11:20:06 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:07.002 11:20:06 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:07.002 11:20:06 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:07.002 11:20:06 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:07.002 11:20:06 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.002 11:20:06 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.002 11:20:06 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.002 11:20:06 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.002 11:20:06 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.002 11:20:06 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.002 11:20:06 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.002 11:20:06 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.002 11:20:06 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.002 11:20:06 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.002 11:20:06 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.002 11:20:06 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.002 11:20:06 -- scripts/common.sh@345 -- # : 1 00:04:07.002 11:20:06 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.002 11:20:06 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.002 11:20:06 -- scripts/common.sh@365 -- # decimal 1 00:04:07.002 11:20:06 -- scripts/common.sh@353 -- # local d=1 00:04:07.002 11:20:06 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.002 11:20:06 -- scripts/common.sh@355 -- # echo 1 00:04:07.002 11:20:06 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.002 11:20:06 -- scripts/common.sh@366 -- # decimal 2 00:04:07.002 11:20:06 -- scripts/common.sh@353 -- # local d=2 00:04:07.002 11:20:06 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.002 11:20:06 -- scripts/common.sh@355 -- # echo 2 00:04:07.002 11:20:06 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.002 11:20:06 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.002 11:20:06 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.002 11:20:06 -- scripts/common.sh@368 -- # return 0 00:04:07.002 11:20:06 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.002 11:20:06 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.002 --rc genhtml_branch_coverage=1 00:04:07.002 --rc genhtml_function_coverage=1 00:04:07.002 --rc genhtml_legend=1 00:04:07.002 --rc geninfo_all_blocks=1 00:04:07.002 --rc geninfo_unexecuted_blocks=1 00:04:07.002 00:04:07.002 ' 00:04:07.002 11:20:06 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.002 --rc genhtml_branch_coverage=1 00:04:07.002 --rc genhtml_function_coverage=1 00:04:07.002 --rc genhtml_legend=1 00:04:07.002 --rc geninfo_all_blocks=1 00:04:07.002 --rc geninfo_unexecuted_blocks=1 00:04:07.002 00:04:07.002 ' 00:04:07.002 11:20:06 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.002 --rc genhtml_branch_coverage=1 00:04:07.002 --rc genhtml_function_coverage=1 00:04:07.002 --rc genhtml_legend=1 00:04:07.002 --rc geninfo_all_blocks=1 00:04:07.002 --rc geninfo_unexecuted_blocks=1 00:04:07.002 00:04:07.002 ' 00:04:07.002 11:20:06 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:07.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.002 --rc genhtml_branch_coverage=1 00:04:07.002 --rc genhtml_function_coverage=1 00:04:07.002 --rc genhtml_legend=1 00:04:07.002 --rc geninfo_all_blocks=1 00:04:07.002 --rc geninfo_unexecuted_blocks=1 00:04:07.002 00:04:07.002 ' 00:04:07.002 11:20:06 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.002 11:20:06 -- nvmf/common.sh@7 -- # uname -s 00:04:07.002 11:20:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.002 11:20:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.002 11:20:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.002 11:20:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.002 11:20:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.002 11:20:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.002 11:20:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.002 11:20:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.002 11:20:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.002 11:20:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.002 11:20:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:04:07.002 
11:20:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:04:07.002 11:20:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.002 11:20:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.002 11:20:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:07.003 11:20:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.003 11:20:06 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.003 11:20:06 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.003 11:20:06 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.003 11:20:06 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.003 11:20:06 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.003 11:20:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.003 11:20:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.003 11:20:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.003 11:20:06 -- paths/export.sh@5 -- # export PATH 00:04:07.003 11:20:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.003 11:20:06 -- nvmf/common.sh@51 -- # : 0 00:04:07.003 11:20:06 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.003 11:20:06 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.003 11:20:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.003 11:20:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.003 11:20:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.003 11:20:06 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.003 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.003 11:20:06 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.003 11:20:06 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.003 11:20:06 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.003 11:20:06 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.003 11:20:06 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.003 11:20:06 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.003 11:20:06 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.003 11:20:06 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.003 11:20:06 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.003 11:20:06 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:07.003 11:20:06 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.003 11:20:06 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.003 11:20:06 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.003 11:20:06 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.003 11:20:06 -- spdk/autotest.sh@48 -- # udevadm_pid=54192 00:04:07.003 11:20:06 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.003 11:20:06 -- pm/common@17 -- # local monitor 00:04:07.003 11:20:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.003 11:20:06 -- pm/common@21 -- # date +%s 00:04:07.003 11:20:06 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.003 11:20:06 -- pm/common@25 -- # sleep 1 00:04:07.003 11:20:06 -- pm/common@21 -- # date +%s 00:04:07.003 11:20:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730805606 00:04:07.003 11:20:06 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730805606 00:04:07.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730805606_collect-vmstat.pm.log 00:04:07.003 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730805606_collect-cpu-load.pm.log 00:04:07.936 11:20:07 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:07.936 11:20:07 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:07.936 11:20:07 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:07.936 11:20:07 -- common/autotest_common.sh@10 -- # set +x 00:04:07.936 11:20:07 -- spdk/autotest.sh@59 -- # create_test_list 00:04:07.936 11:20:07 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:07.936 11:20:07 -- common/autotest_common.sh@10 -- # set +x 00:04:08.194 11:20:07 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:08.194 11:20:07 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:08.194 11:20:07 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:08.194 11:20:07 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:08.194 11:20:07 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:08.194 11:20:07 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.194 11:20:07 -- common/autotest_common.sh@1455 -- # uname 00:04:08.194 11:20:07 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:08.194 11:20:07 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.194 11:20:07 -- common/autotest_common.sh@1475 -- # uname 00:04:08.194 11:20:07 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:08.194 11:20:07 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.194 11:20:07 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.194 lcov: LCOV version 1.15 00:04:08.194 11:20:07 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:23.075 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:23.075 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:35.269 11:20:34 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:35.269 11:20:34 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:35.269 11:20:34 -- common/autotest_common.sh@10 -- # set +x 00:04:35.269 11:20:34 -- spdk/autotest.sh@78 -- # rm -f 00:04:35.269 11:20:34 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:35.526 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:35.784 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:35.784 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:35.784 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:36.042 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:36.042 11:20:35 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:36.042 11:20:35 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:36.042 11:20:35 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:36.042 11:20:35 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:36.042 11:20:35 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:36.042 11:20:35 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:36.042 11:20:35 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:36.042 11:20:35 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:36.042 11:20:35 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:36.042 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.042 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.042 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:36.042 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:36.042 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.042 No valid GPT data, bailing 00:04:36.042 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.042 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.042 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.042 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.042 1+0 records in 00:04:36.042 1+0 records out 00:04:36.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0179573 s, 58.4 MB/s 00:04:36.042 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.042 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.042 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:36.042 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:36.042 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:36.042 No valid GPT data, bailing 00:04:36.043 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:36.043 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.043 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.043 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:36.043 1+0 records in 00:04:36.043 1+0 records out 00:04:36.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00421506 s, 249 MB/s 00:04:36.043 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.043 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.043 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:36.043 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:36.043 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:36.043 No valid GPT data, bailing 00:04:36.043 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:36.043 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.043 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.043 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:36.043 1+0 
records in 00:04:36.043 1+0 records out 00:04:36.043 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00423931 s, 247 MB/s 00:04:36.043 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.043 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.043 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:36.043 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:36.043 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:36.043 No valid GPT data, bailing 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.301 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.301 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:36.301 1+0 records in 00:04:36.301 1+0 records out 00:04:36.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00444189 s, 236 MB/s 00:04:36.301 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.301 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.301 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:36.301 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:36.301 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:36.301 No valid GPT data, bailing 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.301 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.301 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:36.301 1+0 records in 00:04:36.301 1+0 records out 00:04:36.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00288485 s, 363 MB/s 00:04:36.301 11:20:35 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:36.301 11:20:35 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:36.301 11:20:35 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:36.301 11:20:35 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:36.301 11:20:35 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:36.301 No valid GPT data, bailing 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:36.301 11:20:35 -- scripts/common.sh@394 -- # pt= 00:04:36.301 11:20:35 -- scripts/common.sh@395 -- # return 1 00:04:36.301 11:20:35 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:36.301 1+0 records in 00:04:36.301 1+0 records out 00:04:36.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434212 s, 241 MB/s 00:04:36.301 11:20:35 -- spdk/autotest.sh@105 -- # sync 00:04:36.301 11:20:35 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.301 11:20:35 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.301 11:20:35 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:38.199 11:20:37 -- spdk/autotest.sh@111 -- # uname -s 00:04:38.199 11:20:37 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:38.199 11:20:37 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:38.199 11:20:37 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:38.457 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.020 
Hugepages 00:04:39.020 node hugesize free / total 00:04:39.020 node0 1048576kB 0 / 0 00:04:39.020 node0 2048kB 0 / 0 00:04:39.020 00:04:39.020 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:39.020 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:39.020 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:39.020 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:39.277 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:39.277 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:39.277 11:20:38 -- spdk/autotest.sh@117 -- # uname -s 00:04:39.277 11:20:38 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:39.277 11:20:38 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:39.277 11:20:38 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:39.841 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:40.409 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.409 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.409 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.409 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:40.409 11:20:39 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:41.342 11:20:40 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:41.342 11:20:40 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:41.342 11:20:40 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:41.342 11:20:40 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:41.342 11:20:40 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:41.342 11:20:40 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:41.342 11:20:40 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:41.342 11:20:40 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:41.342 11:20:40 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:41.342 11:20:40 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:41.342 11:20:40 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:41.342 11:20:40 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:41.907 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:41.907 Waiting for block devices as requested 00:04:41.907 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.167 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.167 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:42.167 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:47.430 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:47.430 11:20:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:47.430 11:20:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:47.430 11:20:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:47.430 11:20:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.430 11:20:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.430 11:20:46 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:47.430 11:20:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:47.430 11:20:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:47.431 11:20:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:47.431 11:20:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1541 -- # continue 00:04:47.431 11:20:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:47.431 11:20:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1541 -- # continue 00:04:47.431 11:20:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:47.431 11:20:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1541 -- # continue 00:04:47.431 11:20:46 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:47.431 11:20:46 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:47.431 11:20:46 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:47.431 11:20:46 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:47.431 11:20:46 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:47.431 11:20:46 -- common/autotest_common.sh@1541 -- # continue 00:04:47.431 11:20:46 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:47.431 11:20:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:47.431 11:20:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.431 11:20:46 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:47.431 11:20:46 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.431 11:20:46 -- common/autotest_common.sh@10 -- # set +x 00:04:47.431 11:20:46 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:47.997 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.563 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.563 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.563 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.563 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:48.563 11:20:47 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:48.563 11:20:47 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:48.563 11:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:48.563 11:20:47 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:48.563 11:20:47 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:48.563 11:20:47 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:48.563 11:20:47 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:48.563 11:20:47 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:48.563 11:20:47 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:48.563 11:20:47 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:48.563 11:20:47 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:48.563 11:20:47 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:48.563 11:20:47 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:48.563 11:20:47 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.564 11:20:47 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:48.564 11:20:47 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:48.564 11:20:47 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:48.564 11:20:47 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:48.564 11:20:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:48.564 11:20:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:48.564 11:20:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:48.564 11:20:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:48.564 11:20:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:48.564 11:20:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:48.564 11:20:47 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:48.564 11:20:47 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:48.564 11:20:47 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:48.564 11:20:47 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:48.564 11:20:47 -- common/autotest_common.sh@1570 -- # return 0 00:04:48.564 11:20:47 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:48.564 11:20:47 -- common/autotest_common.sh@1578 -- # return 0 00:04:48.564 11:20:47 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:48.564 11:20:47 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:48.564 11:20:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.564 11:20:47 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.564 11:20:47 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:48.564 11:20:47 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:48.564 11:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:48.564 11:20:47 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:48.564 11:20:47 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:48.564 11:20:47 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.564 11:20:47 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.564 11:20:47 -- common/autotest_common.sh@10 -- # set +x 00:04:48.822 ************************************ 00:04:48.822 START TEST env 00:04:48.822 ************************************ 00:04:48.822 11:20:47 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:48.822 * Looking for test storage... 00:04:48.822 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:48.822 11:20:47 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:48.822 11:20:47 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:48.822 11:20:47 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:48.822 11:20:47 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:48.822 11:20:47 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.822 11:20:47 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.822 11:20:47 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.822 11:20:47 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.822 11:20:47 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.822 11:20:47 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.822 11:20:47 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.822 11:20:47 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.822 11:20:47 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.822 11:20:47 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.822 11:20:47 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.822 11:20:47 env -- scripts/common.sh@344 -- # case "$op" in 00:04:48.822 11:20:47 env -- scripts/common.sh@345 -- # : 1 00:04:48.822 11:20:47 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.822 11:20:47 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.822 11:20:47 env -- scripts/common.sh@365 -- # decimal 1 00:04:48.822 11:20:48 env -- scripts/common.sh@353 -- # local d=1 00:04:48.822 11:20:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.822 11:20:48 env -- scripts/common.sh@355 -- # echo 1 00:04:48.822 11:20:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.822 11:20:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:48.822 11:20:48 env -- scripts/common.sh@353 -- # local d=2 00:04:48.822 11:20:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.822 11:20:48 env -- scripts/common.sh@355 -- # echo 2 00:04:48.822 11:20:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.822 11:20:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.822 11:20:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.822 11:20:48 env -- scripts/common.sh@368 -- # return 0 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:48.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.822 --rc genhtml_branch_coverage=1 00:04:48.822 --rc genhtml_function_coverage=1 00:04:48.822 --rc genhtml_legend=1 00:04:48.822 --rc geninfo_all_blocks=1 00:04:48.822 --rc geninfo_unexecuted_blocks=1 00:04:48.822 00:04:48.822 ' 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:48.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.822 --rc genhtml_branch_coverage=1 00:04:48.822 --rc genhtml_function_coverage=1 00:04:48.822 --rc genhtml_legend=1 00:04:48.822 --rc geninfo_all_blocks=1 00:04:48.822 --rc geninfo_unexecuted_blocks=1 00:04:48.822 00:04:48.822 ' 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:48.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.822 --rc genhtml_branch_coverage=1 00:04:48.822 --rc genhtml_function_coverage=1 00:04:48.822 --rc genhtml_legend=1 00:04:48.822 --rc geninfo_all_blocks=1 00:04:48.822 --rc geninfo_unexecuted_blocks=1 00:04:48.822 00:04:48.822 ' 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:48.822 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.822 --rc genhtml_branch_coverage=1 00:04:48.822 --rc genhtml_function_coverage=1 00:04:48.822 --rc genhtml_legend=1 00:04:48.822 --rc geninfo_all_blocks=1 00:04:48.822 --rc geninfo_unexecuted_blocks=1 00:04:48.822 00:04:48.822 ' 00:04:48.822 11:20:48 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:48.822 11:20:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:48.822 11:20:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.822 ************************************ 00:04:48.822 START TEST env_memory 00:04:48.822 ************************************ 00:04:48.822 11:20:48 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:48.822 00:04:48.822 00:04:48.822 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.822 http://cunit.sourceforge.net/ 00:04:48.822 00:04:48.822 00:04:48.822 Suite: memory 00:04:48.822 Test: alloc and free memory map ...[2024-11-05 11:20:48.076932] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.080 passed 00:04:49.080 Test: mem map translation ...[2024-11-05 11:20:48.115947] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.080 [2024-11-05 11:20:48.116058] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.080 [2024-11-05 11:20:48.116165] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.080 [2024-11-05 11:20:48.116204] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.080 passed 00:04:49.080 Test: mem map registration ...[2024-11-05 11:20:48.184946] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.080 [2024-11-05 11:20:48.185081] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:49.080 passed 00:04:49.080 Test: mem map adjacent registrations ...passed 00:04:49.080 00:04:49.080 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.080 suites 1 1 n/a 0 0 00:04:49.080 tests 4 4 4 0 0 00:04:49.080 asserts 152 152 152 0 n/a 00:04:49.080 00:04:49.080 Elapsed time = 0.235 seconds 00:04:49.080 00:04:49.080 real 0m0.261s 00:04:49.080 user 0m0.238s 00:04:49.080 sys 0m0.016s 00:04:49.080 11:20:48 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:49.080 11:20:48 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.080 ************************************ 00:04:49.080 END TEST env_memory 00:04:49.080 ************************************ 00:04:49.080 11:20:48 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.080 11:20:48 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:49.080 11:20:48 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:49.080 11:20:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.080 ************************************ 00:04:49.080 START TEST env_vtophys 00:04:49.080 ************************************ 00:04:49.080 11:20:48 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:49.353 EAL: lib.eal log level changed from notice to debug 00:04:49.353 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 1 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 2 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 3 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 4 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 5 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 6 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 7 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 8 as core 0 on socket 0 00:04:49.353 EAL: Detected lcore 9 as core 0 on socket 0 00:04:49.354 EAL: Maximum logical cores by configuration: 128 00:04:49.354 EAL: Detected CPU lcores: 10 00:04:49.354 EAL: Detected NUMA nodes: 1 00:04:49.354 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:49.354 EAL: Detected shared linkage of DPDK 00:04:49.354 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:49.354 EAL: Selected IOVA mode 'PA' 00:04:49.354 EAL: Probing VFIO support... 00:04:49.354 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.354 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:49.354 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.354 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.354 EAL: Setting up physically contiguous memory... 00:04:49.354 EAL: Setting maximum number of open files to 524288 00:04:49.354 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.354 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.354 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.354 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.354 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.354 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.354 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.354 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.354 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.354 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:49.354 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.354 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.354 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.354 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.354 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.354 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.354 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.354 EAL: Hugepages will be freed exactly as allocated. 00:04:49.354 EAL: No shared files mode enabled, IPC is disabled 00:04:49.354 EAL: No shared files mode enabled, IPC is disabled 00:04:49.354 EAL: TSC frequency is ~2600000 KHz 00:04:49.354 EAL: Main lcore 0 is ready (tid=7fb39a011a40;cpuset=[0]) 00:04:49.354 EAL: Trying to obtain current memory policy. 00:04:49.354 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.354 EAL: Restoring previous memory policy: 0 00:04:49.354 EAL: request: mp_malloc_sync 00:04:49.354 EAL: No shared files mode enabled, IPC is disabled 00:04:49.354 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.354 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:49.354 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.354 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.354 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:49.354 00:04:49.354 00:04:49.354 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.354 http://cunit.sourceforge.net/ 00:04:49.354 00:04:49.354 00:04:49.354 Suite: components_suite 00:04:49.632 Test: vtophys_malloc_test ...passed 00:04:49.632 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:49.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.632 EAL: Restoring previous memory policy: 4 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.632 EAL: Trying to obtain current memory policy. 00:04:49.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.632 EAL: Restoring previous memory policy: 4 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.632 EAL: Trying to obtain current memory policy. 00:04:49.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.632 EAL: Restoring previous memory policy: 4 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.632 EAL: Trying to obtain current memory policy. 00:04:49.632 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.632 EAL: Restoring previous memory policy: 4 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was expanded by 18MB 00:04:49.632 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.632 EAL: request: mp_malloc_sync 00:04:49.632 EAL: No shared files mode enabled, IPC is disabled 00:04:49.632 EAL: Heap on socket 0 was shrunk by 18MB 00:04:49.889 EAL: Trying to obtain current memory policy. 00:04:49.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.889 EAL: Restoring previous memory policy: 4 00:04:49.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.889 EAL: request: mp_malloc_sync 00:04:49.889 EAL: No shared files mode enabled, IPC is disabled 00:04:49.889 EAL: Heap on socket 0 was expanded by 34MB 00:04:49.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.889 EAL: request: mp_malloc_sync 00:04:49.889 EAL: No shared files mode enabled, IPC is disabled 00:04:49.889 EAL: Heap on socket 0 was shrunk by 34MB 00:04:49.889 EAL: Trying to obtain current memory policy. 
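The vtophys_spdk_malloc_test output that follows cycles the DPDK heap up and down: each allocation that outgrows the currently reserved hugepage memory triggers a mem event callback and a "Heap on socket 0 was expanded by N MB" line, and freeing the buffer may shrink it again. Below is a minimal sketch of that pattern, assuming only the public SPDK env API (spdk/env.h); it illustrates the mechanism and is not the actual test/env/vtophys source.

```c
/*
 * Illustrative only: allocate progressively larger DMA-safe buffers and
 * translate them with spdk_vtophys(). Allocations that outgrow the heap
 * produce the "Heap on socket 0 was expanded by N MB" callbacks seen in
 * this log; freeing may produce the matching "shrunk by" lines.
 * Assumes spdk_env_init() has already been called.
 */
#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

static int vtophys_walk(void)
{
	for (size_t size = 4 * 1024 * 1024; size <= 64 * 1024 * 1024; size *= 2) {
		void *buf = spdk_dma_zmalloc(size, 0x1000, NULL);
		if (buf == NULL) {
			return -1;
		}

		uint64_t len = size;
		uint64_t paddr = spdk_vtophys(buf, &len);
		if (paddr == SPDK_VTOPHYS_ERROR) {
			spdk_dma_free(buf);
			return -1;
		}
		printf("vaddr %p -> paddr 0x%" PRIx64 " (%" PRIu64 " bytes mapped)\n",
		       buf, paddr, len);
		spdk_dma_free(buf); /* may trigger a "shrunk by" mem event */
	}
	return 0;
}
```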
00:04:49.889 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.889 EAL: Restoring previous memory policy: 4 00:04:49.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.889 EAL: request: mp_malloc_sync 00:04:49.889 EAL: No shared files mode enabled, IPC is disabled 00:04:49.889 EAL: Heap on socket 0 was expanded by 66MB 00:04:49.889 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.889 EAL: request: mp_malloc_sync 00:04:49.889 EAL: No shared files mode enabled, IPC is disabled 00:04:49.890 EAL: Heap on socket 0 was shrunk by 66MB 00:04:49.890 EAL: Trying to obtain current memory policy. 00:04:49.890 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.890 EAL: Restoring previous memory policy: 4 00:04:49.890 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.890 EAL: request: mp_malloc_sync 00:04:49.890 EAL: No shared files mode enabled, IPC is disabled 00:04:49.890 EAL: Heap on socket 0 was expanded by 130MB 00:04:50.147 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.147 EAL: request: mp_malloc_sync 00:04:50.147 EAL: No shared files mode enabled, IPC is disabled 00:04:50.147 EAL: Heap on socket 0 was shrunk by 130MB 00:04:50.405 EAL: Trying to obtain current memory policy. 00:04:50.405 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.405 EAL: Restoring previous memory policy: 4 00:04:50.405 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.405 EAL: request: mp_malloc_sync 00:04:50.405 EAL: No shared files mode enabled, IPC is disabled 00:04:50.405 EAL: Heap on socket 0 was expanded by 258MB 00:04:50.664 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.664 EAL: request: mp_malloc_sync 00:04:50.664 EAL: No shared files mode enabled, IPC is disabled 00:04:50.664 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.923 EAL: Trying to obtain current memory policy. 00:04:50.923 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.923 EAL: Restoring previous memory policy: 4 00:04:50.923 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.923 EAL: request: mp_malloc_sync 00:04:50.923 EAL: No shared files mode enabled, IPC is disabled 00:04:50.923 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.494 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.755 EAL: request: mp_malloc_sync 00:04:51.755 EAL: No shared files mode enabled, IPC is disabled 00:04:51.755 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.327 EAL: Trying to obtain current memory policy. 
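The "Calling mem event callback 'spdk:(nil)'" lines that bracket every expansion and shrink are DPDK notifying SPDK's memory subsystem, which fans the event out to every registered spdk_mem_map — the same machinery the env_memory suite above exercised with invalid spdk_mem_map_set_translation and spdk_mem_register parameters. A hedged sketch of the consumer side follows; the field and enum names are taken from spdk/env.h as I understand it, so treat exact signatures as assumptions rather than a copy of the test source.

```c
/*
 * Illustrative consumer of memory hotplug events: register an
 * spdk_mem_map with a notify callback. SPDK invokes it whenever memory
 * is added to or removed from the heap ("expanded by"/"shrunk by"
 * events above). A nonzero return from the REGISTER case is what
 * produces the "Initial mem_map notify failed" error in the
 * env_memory output earlier.
 */
#include <errno.h>
#include <stdint.h>
#include "spdk/env.h"

static int
my_mem_notify(void *cb_ctx, struct spdk_mem_map *map,
	      enum spdk_mem_map_notify_action action,
	      void *vaddr, size_t size)
{
	uint64_t va = (uint64_t)(uintptr_t)vaddr;

	switch (action) {
	case SPDK_MEM_MAP_NOTIFY_REGISTER:
		/* Identity translation, purely for illustration. */
		return spdk_mem_map_set_translation(map, va, size, va);
	case SPDK_MEM_MAP_NOTIFY_UNREGISTER:
		return spdk_mem_map_clear_translation(map, va, size);
	default:
		return -EINVAL;
	}
}

static const struct spdk_mem_map_ops my_map_ops = {
	.notify_cb = my_mem_notify,
	.are_contiguous = NULL,
};

static struct spdk_mem_map *
make_map(void)
{
	/* 0 is the default translation returned for unregistered regions. */
	return spdk_mem_map_alloc(0, &my_map_ops, NULL);
}
```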
00:04:52.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.327 EAL: Restoring previous memory policy: 4 00:04:52.327 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.327 EAL: request: mp_malloc_sync 00:04:52.327 EAL: No shared files mode enabled, IPC is disabled 00:04:52.327 EAL: Heap on socket 0 was expanded by 1026MB 00:04:53.750 EAL: Calling mem event callback 'spdk:(nil)' 00:04:53.750 EAL: request: mp_malloc_sync 00:04:53.750 EAL: No shared files mode enabled, IPC is disabled 00:04:53.750 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:54.693 passed 00:04:54.693 00:04:54.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.693 suites 1 1 n/a 0 0 00:04:54.693 tests 2 2 2 0 0 00:04:54.693 asserts 5859 5859 5859 0 n/a 00:04:54.693 00:04:54.693 Elapsed time = 5.278 seconds 00:04:54.693 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.693 EAL: request: mp_malloc_sync 00:04:54.693 EAL: No shared files mode enabled, IPC is disabled 00:04:54.693 EAL: Heap on socket 0 was shrunk by 2MB 00:04:54.693 EAL: No shared files mode enabled, IPC is disabled 00:04:54.693 EAL: No shared files mode enabled, IPC is disabled 00:04:54.693 EAL: No shared files mode enabled, IPC is disabled 00:04:54.693 00:04:54.693 real 0m5.577s 00:04:54.693 user 0m4.724s 00:04:54.693 sys 0m0.678s 00:04:54.693 ************************************ 00:04:54.693 END TEST env_vtophys 00:04:54.693 ************************************ 00:04:54.693 11:20:53 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.693 11:20:53 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:54.956 11:20:53 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.956 11:20:53 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:54.956 11:20:53 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.956 11:20:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.956 ************************************ 00:04:54.956 START TEST env_pci 00:04:54.956 ************************************ 00:04:54.956 11:20:53 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:54.956 00:04:54.956 00:04:54.956 CUnit - A unit testing framework for C - Version 2.1-3 00:04:54.956 http://cunit.sourceforge.net/ 00:04:54.956 00:04:54.956 00:04:54.956 Suite: pci 00:04:54.956 Test: pci_hook ...[2024-11-05 11:20:54.017750] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56955 has claimed it 00:04:54.956 passed 00:04:54.956 00:04:54.956 Run Summary: Type Total Ran Passed Failed Inactive 00:04:54.956 suites 1 1 n/a 0 0 00:04:54.956 tests 1 1 1 0 0 00:04:54.956 asserts 25 25 25 0 n/a 00:04:54.956 00:04:54.956 Elapsed time = 0.007 seconds 00:04:54.956 EAL: Cannot find device (10000:00:01.0) 00:04:54.956 EAL: Failed to attach device on primary process 00:04:54.956 ************************************ 00:04:54.956 END TEST env_pci 00:04:54.956 ************************************ 00:04:54.956 00:04:54.956 real 0m0.064s 00:04:54.956 user 0m0.024s 00:04:54.956 sys 0m0.039s 00:04:54.956 11:20:54 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:54.956 11:20:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:54.956 11:20:54 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:54.956 11:20:54 env -- env/env.sh@15 -- # uname 00:04:54.956 11:20:54 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:54.956 11:20:54 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:54.956 11:20:54 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.956 11:20:54 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:04:54.956 11:20:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:54.956 11:20:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:54.956 ************************************ 00:04:54.956 START TEST env_dpdk_post_init 00:04:54.956 ************************************ 00:04:54.956 11:20:54 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:54.956 EAL: Detected CPU lcores: 10 00:04:54.956 EAL: Detected NUMA nodes: 1 00:04:54.956 EAL: Detected shared linkage of DPDK 00:04:54.956 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:54.956 EAL: Selected IOVA mode 'PA' 00:04:55.218 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.218 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:55.218 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:55.218 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:55.218 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:55.218 Starting DPDK initialization... 00:04:55.218 Starting SPDK post initialization... 00:04:55.218 SPDK NVMe probe 00:04:55.218 Attaching to 0000:00:10.0 00:04:55.218 Attaching to 0000:00:11.0 00:04:55.218 Attaching to 0000:00:12.0 00:04:55.218 Attaching to 0000:00:13.0 00:04:55.218 Attached to 0000:00:11.0 00:04:55.218 Attached to 0000:00:13.0 00:04:55.218 Attached to 0000:00:10.0 00:04:55.218 Attached to 0000:00:12.0 00:04:55.218 Cleaning up... 
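The "Attaching to" / "Attached to" sequence above comes from the env_dpdk_post_init helper probing the four emulated controllers (0000:00:10.0 through 0000:00:13.0) with the spdk_nvme PCI driver. Below is a hedged, self-contained sketch of that probe/attach flow using the public spdk/nvme.h API; it mirrors the log messages but is not the test source itself.

```c
/*
 * Illustrative probe/attach flow: enumerate PCIe NVMe controllers and
 * accept every one, printing messages analogous to the log above.
 */
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* attach to every controller we find */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "nvme_probe_sketch"; /* illustrative name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* NULL trid means "probe the default (PCIe) transport". */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}
	return 0;
}
```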
00:04:55.218 00:04:55.218 real 0m0.265s 00:04:55.218 user 0m0.086s 00:04:55.218 sys 0m0.081s 00:04:55.218 11:20:54 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.218 11:20:54 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.218 ************************************ 00:04:55.218 END TEST env_dpdk_post_init 00:04:55.218 ************************************ 00:04:55.218 11:20:54 env -- env/env.sh@26 -- # uname 00:04:55.218 11:20:54 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:55.218 11:20:54 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.218 11:20:54 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.218 11:20:54 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.218 11:20:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.218 ************************************ 00:04:55.218 START TEST env_mem_callbacks 00:04:55.218 ************************************ 00:04:55.218 11:20:54 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:55.218 EAL: Detected CPU lcores: 10 00:04:55.218 EAL: Detected NUMA nodes: 1 00:04:55.218 EAL: Detected shared linkage of DPDK 00:04:55.478 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:55.478 EAL: Selected IOVA mode 'PA' 00:04:55.478 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:55.478 00:04:55.478 00:04:55.478 CUnit - A unit testing framework for C - Version 2.1-3 00:04:55.478 http://cunit.sourceforge.net/ 00:04:55.478 00:04:55.478 00:04:55.478 Suite: memory 00:04:55.478 Test: test ... 00:04:55.478 register 0x200000200000 2097152 00:04:55.478 malloc 3145728 00:04:55.478 register 0x200000400000 4194304 00:04:55.478 buf 0x2000004fffc0 len 3145728 PASSED 00:04:55.478 malloc 64 00:04:55.478 buf 0x2000004ffec0 len 64 PASSED 00:04:55.478 malloc 4194304 00:04:55.478 register 0x200000800000 6291456 00:04:55.478 buf 0x2000009fffc0 len 4194304 PASSED 00:04:55.478 free 0x2000004fffc0 3145728 00:04:55.478 free 0x2000004ffec0 64 00:04:55.478 unregister 0x200000400000 4194304 PASSED 00:04:55.478 free 0x2000009fffc0 4194304 00:04:55.478 unregister 0x200000800000 6291456 PASSED 00:04:55.478 malloc 8388608 00:04:55.478 register 0x200000400000 10485760 00:04:55.478 buf 0x2000005fffc0 len 8388608 PASSED 00:04:55.478 free 0x2000005fffc0 8388608 00:04:55.478 unregister 0x200000400000 10485760 PASSED 00:04:55.478 passed 00:04:55.478 00:04:55.478 Run Summary: Type Total Ran Passed Failed Inactive 00:04:55.478 suites 1 1 n/a 0 0 00:04:55.478 tests 1 1 1 0 0 00:04:55.478 asserts 15 15 15 0 n/a 00:04:55.478 00:04:55.478 Elapsed time = 0.042 seconds 00:04:55.478 00:04:55.478 real 0m0.211s 00:04:55.478 user 0m0.061s 00:04:55.478 sys 0m0.047s 00:04:55.478 11:20:54 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.478 11:20:54 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:55.478 ************************************ 00:04:55.478 END TEST env_mem_callbacks 00:04:55.478 ************************************ 00:04:55.478 00:04:55.478 real 0m6.857s 00:04:55.478 user 0m5.302s 00:04:55.478 sys 0m1.070s 00:04:55.478 11:20:54 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:55.478 ************************************ 00:04:55.478 11:20:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:55.478 END TEST env 00:04:55.478 
************************************ 00:04:55.739 11:20:54 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.739 11:20:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:55.739 11:20:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:55.739 11:20:54 -- common/autotest_common.sh@10 -- # set +x 00:04:55.739 ************************************ 00:04:55.739 START TEST rpc 00:04:55.739 ************************************ 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:55.739 * Looking for test storage... 00:04:55.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:55.739 11:20:54 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:55.739 11:20:54 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:55.739 11:20:54 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:55.739 11:20:54 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:55.739 11:20:54 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:55.739 11:20:54 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:55.739 11:20:54 rpc -- scripts/common.sh@345 -- # : 1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:55.739 11:20:54 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:55.739 11:20:54 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@353 -- # local d=1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:55.739 11:20:54 rpc -- scripts/common.sh@355 -- # echo 1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:55.739 11:20:54 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@353 -- # local d=2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:55.739 11:20:54 rpc -- scripts/common.sh@355 -- # echo 2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:55.739 11:20:54 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:55.739 11:20:54 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:55.739 11:20:54 rpc -- scripts/common.sh@368 -- # return 0 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:55.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
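The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message just above is the rpc test harness waiting for spdk_tgt to open its JSON-RPC socket; every rpc_cmd invocation in the tests that follow is simply a JSON-RPC 2.0 request over that socket. The sketch below shows the idea with nothing but POSIX sockets — the socket path and the bdev_get_bdevs method name come from this log, while the hand-rolled client itself is illustrative and not SPDK's rpc.py.

```c
/*
 * Illustrative JSON-RPC client: connect to the spdk_tgt Unix-domain
 * socket and issue one bdev_get_bdevs request, as rpc_cmd does below.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int
main(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0) {
		perror("socket");
		return 1;
	}

	struct sockaddr_un addr = { .sun_family = AF_UNIX };
	strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		perror("connect"); /* fails until spdk_tgt is listening */
		return 1;
	}

	const char *req =
		"{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"bdev_get_bdevs\"}";
	if (write(fd, req, strlen(req)) < 0) {
		perror("write");
		return 1;
	}

	char resp[4096];
	ssize_t n = read(fd, resp, sizeof(resp) - 1);
	if (n > 0) {
		resp[n] = '\0';
		printf("%s\n", resp); /* JSON array of bdevs, as dumped in the tests below */
	}
	close(fd);
	return 0;
}
```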
00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:55.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.739 --rc genhtml_branch_coverage=1 00:04:55.739 --rc genhtml_function_coverage=1 00:04:55.739 --rc genhtml_legend=1 00:04:55.739 --rc geninfo_all_blocks=1 00:04:55.739 --rc geninfo_unexecuted_blocks=1 00:04:55.739 00:04:55.739 ' 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:55.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.739 --rc genhtml_branch_coverage=1 00:04:55.739 --rc genhtml_function_coverage=1 00:04:55.739 --rc genhtml_legend=1 00:04:55.739 --rc geninfo_all_blocks=1 00:04:55.739 --rc geninfo_unexecuted_blocks=1 00:04:55.739 00:04:55.739 ' 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:55.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.739 --rc genhtml_branch_coverage=1 00:04:55.739 --rc genhtml_function_coverage=1 00:04:55.739 --rc genhtml_legend=1 00:04:55.739 --rc geninfo_all_blocks=1 00:04:55.739 --rc geninfo_unexecuted_blocks=1 00:04:55.739 00:04:55.739 ' 00:04:55.739 11:20:54 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:55.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:55.740 --rc genhtml_branch_coverage=1 00:04:55.740 --rc genhtml_function_coverage=1 00:04:55.740 --rc genhtml_legend=1 00:04:55.740 --rc geninfo_all_blocks=1 00:04:55.740 --rc geninfo_unexecuted_blocks=1 00:04:55.740 00:04:55.740 ' 00:04:55.740 11:20:54 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57082 00:04:55.740 11:20:54 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.740 11:20:54 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57082 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@833 -- # '[' -z 57082 ']' 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.740 11:20:54 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:04:55.740 11:20:54 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.740 [2024-11-05 11:20:55.010822] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:04:55.740 [2024-11-05 11:20:55.011119] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57082 ] 00:04:56.008 [2024-11-05 11:20:55.180337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.008 [2024-11-05 11:20:55.277365] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:56.008 [2024-11-05 11:20:55.277406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57082' to capture a snapshot of events at runtime. 00:04:56.008 [2024-11-05 11:20:55.277416] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:56.008 [2024-11-05 11:20:55.277425] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:04:56.008 [2024-11-05 11:20:55.277432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57082 for offline analysis/debug. 00:04:56.008 [2024-11-05 11:20:55.278301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.946 11:20:55 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:04:56.946 11:20:55 rpc -- common/autotest_common.sh@866 -- # return 0 00:04:56.946 11:20:55 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.946 11:20:55 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:56.946 11:20:55 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:56.946 11:20:55 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:56.946 11:20:55 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.946 11:20:55 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.946 11:20:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.946 ************************************ 00:04:56.946 START TEST rpc_integrity 00:04:56.946 ************************************ 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.946 { 00:04:56.946 "name": "Malloc0", 00:04:56.946 "aliases": [ 00:04:56.946 "5985ca18-ff54-4eb6-96ec-4d1f2adfd653" 00:04:56.946 ], 00:04:56.946 "product_name": "Malloc disk", 00:04:56.946 "block_size": 512, 00:04:56.946 "num_blocks": 16384, 00:04:56.946 "uuid": "5985ca18-ff54-4eb6-96ec-4d1f2adfd653", 00:04:56.946 "assigned_rate_limits": { 00:04:56.946 "rw_ios_per_sec": 0, 00:04:56.946 "rw_mbytes_per_sec": 0, 00:04:56.946 "r_mbytes_per_sec": 0, 00:04:56.946 "w_mbytes_per_sec": 0 00:04:56.946 }, 00:04:56.946 "claimed": false, 00:04:56.946 "zoned": false, 
00:04:56.946 "supported_io_types": { 00:04:56.946 "read": true, 00:04:56.946 "write": true, 00:04:56.946 "unmap": true, 00:04:56.946 "flush": true, 00:04:56.946 "reset": true, 00:04:56.946 "nvme_admin": false, 00:04:56.946 "nvme_io": false, 00:04:56.946 "nvme_io_md": false, 00:04:56.946 "write_zeroes": true, 00:04:56.946 "zcopy": true, 00:04:56.946 "get_zone_info": false, 00:04:56.946 "zone_management": false, 00:04:56.946 "zone_append": false, 00:04:56.946 "compare": false, 00:04:56.946 "compare_and_write": false, 00:04:56.946 "abort": true, 00:04:56.946 "seek_hole": false, 00:04:56.946 "seek_data": false, 00:04:56.946 "copy": true, 00:04:56.946 "nvme_iov_md": false 00:04:56.946 }, 00:04:56.946 "memory_domains": [ 00:04:56.946 { 00:04:56.946 "dma_device_id": "system", 00:04:56.946 "dma_device_type": 1 00:04:56.946 }, 00:04:56.946 { 00:04:56.946 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.946 "dma_device_type": 2 00:04:56.946 } 00:04:56.946 ], 00:04:56.946 "driver_specific": {} 00:04:56.946 } 00:04:56.946 ]' 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.946 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.946 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.946 [2024-11-05 11:20:55.981876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:56.946 [2024-11-05 11:20:55.982051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.947 [2024-11-05 11:20:55.982087] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:56.947 [2024-11-05 11:20:55.982099] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.947 [2024-11-05 11:20:55.984317] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.947 [2024-11-05 11:20:55.984360] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.947 Passthru0 00:04:56.947 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:55 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.947 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:55 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.947 { 00:04:56.947 "name": "Malloc0", 00:04:56.947 "aliases": [ 00:04:56.947 "5985ca18-ff54-4eb6-96ec-4d1f2adfd653" 00:04:56.947 ], 00:04:56.947 "product_name": "Malloc disk", 00:04:56.947 "block_size": 512, 00:04:56.947 "num_blocks": 16384, 00:04:56.947 "uuid": "5985ca18-ff54-4eb6-96ec-4d1f2adfd653", 00:04:56.947 "assigned_rate_limits": { 00:04:56.947 "rw_ios_per_sec": 0, 00:04:56.947 "rw_mbytes_per_sec": 0, 00:04:56.947 "r_mbytes_per_sec": 0, 00:04:56.947 "w_mbytes_per_sec": 0 00:04:56.947 }, 00:04:56.947 "claimed": true, 00:04:56.947 "claim_type": "exclusive_write", 00:04:56.947 "zoned": false, 00:04:56.947 "supported_io_types": { 00:04:56.947 "read": true, 00:04:56.947 "write": true, 00:04:56.947 "unmap": true, 00:04:56.947 "flush": true, 00:04:56.947 "reset": true, 00:04:56.947 
"nvme_admin": false, 00:04:56.947 "nvme_io": false, 00:04:56.947 "nvme_io_md": false, 00:04:56.947 "write_zeroes": true, 00:04:56.947 "zcopy": true, 00:04:56.947 "get_zone_info": false, 00:04:56.947 "zone_management": false, 00:04:56.947 "zone_append": false, 00:04:56.947 "compare": false, 00:04:56.947 "compare_and_write": false, 00:04:56.947 "abort": true, 00:04:56.947 "seek_hole": false, 00:04:56.947 "seek_data": false, 00:04:56.947 "copy": true, 00:04:56.947 "nvme_iov_md": false 00:04:56.947 }, 00:04:56.947 "memory_domains": [ 00:04:56.947 { 00:04:56.947 "dma_device_id": "system", 00:04:56.947 "dma_device_type": 1 00:04:56.947 }, 00:04:56.947 { 00:04:56.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.947 "dma_device_type": 2 00:04:56.947 } 00:04:56.947 ], 00:04:56.947 "driver_specific": {} 00:04:56.947 }, 00:04:56.947 { 00:04:56.947 "name": "Passthru0", 00:04:56.947 "aliases": [ 00:04:56.947 "8afc92f9-82b2-5ff4-b3d9-9ab713a7bc7f" 00:04:56.947 ], 00:04:56.947 "product_name": "passthru", 00:04:56.947 "block_size": 512, 00:04:56.947 "num_blocks": 16384, 00:04:56.947 "uuid": "8afc92f9-82b2-5ff4-b3d9-9ab713a7bc7f", 00:04:56.947 "assigned_rate_limits": { 00:04:56.947 "rw_ios_per_sec": 0, 00:04:56.947 "rw_mbytes_per_sec": 0, 00:04:56.947 "r_mbytes_per_sec": 0, 00:04:56.947 "w_mbytes_per_sec": 0 00:04:56.947 }, 00:04:56.947 "claimed": false, 00:04:56.947 "zoned": false, 00:04:56.947 "supported_io_types": { 00:04:56.947 "read": true, 00:04:56.947 "write": true, 00:04:56.947 "unmap": true, 00:04:56.947 "flush": true, 00:04:56.947 "reset": true, 00:04:56.947 "nvme_admin": false, 00:04:56.947 "nvme_io": false, 00:04:56.947 "nvme_io_md": false, 00:04:56.947 "write_zeroes": true, 00:04:56.947 "zcopy": true, 00:04:56.947 "get_zone_info": false, 00:04:56.947 "zone_management": false, 00:04:56.947 "zone_append": false, 00:04:56.947 "compare": false, 00:04:56.947 "compare_and_write": false, 00:04:56.947 "abort": true, 00:04:56.947 "seek_hole": false, 00:04:56.947 "seek_data": false, 00:04:56.947 "copy": true, 00:04:56.947 "nvme_iov_md": false 00:04:56.947 }, 00:04:56.947 "memory_domains": [ 00:04:56.947 { 00:04:56.947 "dma_device_id": "system", 00:04:56.947 "dma_device_type": 1 00:04:56.947 }, 00:04:56.947 { 00:04:56.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.947 "dma_device_type": 2 00:04:56.947 } 00:04:56.947 ], 00:04:56.947 "driver_specific": { 00:04:56.947 "passthru": { 00:04:56.947 "name": "Passthru0", 00:04:56.947 "base_bdev_name": "Malloc0" 00:04:56.947 } 00:04:56.947 } 00:04:56.947 } 00:04:56.947 ]' 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.947 11:20:56 
rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.947 ************************************ 00:04:56.947 END TEST rpc_integrity 00:04:56.947 ************************************ 00:04:56.947 11:20:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.947 00:04:56.947 real 0m0.246s 00:04:56.947 user 0m0.124s 00:04:56.947 sys 0m0.036s 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:56.947 11:20:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:56.947 11:20:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:56.947 11:20:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 ************************************ 00:04:56.947 START TEST rpc_plugins 00:04:56.947 ************************************ 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:56.947 { 00:04:56.947 "name": "Malloc1", 00:04:56.947 "aliases": [ 00:04:56.947 "a0773a2f-fe35-4ad2-be3e-e26ce4f66cb2" 00:04:56.947 ], 00:04:56.947 "product_name": "Malloc disk", 00:04:56.947 "block_size": 4096, 00:04:56.947 "num_blocks": 256, 00:04:56.947 "uuid": "a0773a2f-fe35-4ad2-be3e-e26ce4f66cb2", 00:04:56.947 "assigned_rate_limits": { 00:04:56.947 "rw_ios_per_sec": 0, 00:04:56.947 "rw_mbytes_per_sec": 0, 00:04:56.947 "r_mbytes_per_sec": 0, 00:04:56.947 "w_mbytes_per_sec": 0 00:04:56.947 }, 00:04:56.947 "claimed": false, 00:04:56.947 "zoned": false, 00:04:56.947 "supported_io_types": { 00:04:56.947 "read": true, 00:04:56.947 "write": true, 00:04:56.947 "unmap": true, 00:04:56.947 "flush": true, 00:04:56.947 "reset": true, 00:04:56.947 "nvme_admin": false, 00:04:56.947 "nvme_io": false, 00:04:56.947 "nvme_io_md": false, 00:04:56.947 "write_zeroes": true, 00:04:56.947 "zcopy": true, 00:04:56.947 "get_zone_info": false, 00:04:56.947 "zone_management": false, 00:04:56.947 "zone_append": false, 00:04:56.947 "compare": false, 00:04:56.947 "compare_and_write": false, 00:04:56.947 "abort": true, 00:04:56.947 "seek_hole": false, 00:04:56.947 "seek_data": false, 00:04:56.947 "copy": true, 00:04:56.947 "nvme_iov_md": false 00:04:56.947 }, 00:04:56.947 
"memory_domains": [ 00:04:56.947 { 00:04:56.947 "dma_device_id": "system", 00:04:56.947 "dma_device_type": 1 00:04:56.947 }, 00:04:56.947 { 00:04:56.947 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.947 "dma_device_type": 2 00:04:56.947 } 00:04:56.947 ], 00:04:56.947 "driver_specific": {} 00:04:56.947 } 00:04:56.947 ]' 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:56.947 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:56.947 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.209 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.209 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:57.209 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:57.209 ************************************ 00:04:57.209 END TEST rpc_plugins 00:04:57.209 ************************************ 00:04:57.209 11:20:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:57.209 00:04:57.209 real 0m0.124s 00:04:57.209 user 0m0.077s 00:04:57.209 sys 0m0.015s 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.209 11:20:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:57.209 11:20:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:57.209 11:20:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.209 11:20:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.209 11:20:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.209 ************************************ 00:04:57.209 START TEST rpc_trace_cmd_test 00:04:57.209 ************************************ 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:57.209 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57082", 00:04:57.209 "tpoint_group_mask": "0x8", 00:04:57.209 "iscsi_conn": { 00:04:57.209 "mask": "0x2", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "scsi": { 00:04:57.209 "mask": "0x4", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "bdev": { 00:04:57.209 "mask": "0x8", 00:04:57.209 "tpoint_mask": "0xffffffffffffffff" 00:04:57.209 }, 00:04:57.209 "nvmf_rdma": { 00:04:57.209 "mask": "0x10", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "nvmf_tcp": { 00:04:57.209 "mask": "0x20", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 
00:04:57.209 "ftl": { 00:04:57.209 "mask": "0x40", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "blobfs": { 00:04:57.209 "mask": "0x80", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "dsa": { 00:04:57.209 "mask": "0x200", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "thread": { 00:04:57.209 "mask": "0x400", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "nvme_pcie": { 00:04:57.209 "mask": "0x800", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "iaa": { 00:04:57.209 "mask": "0x1000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "nvme_tcp": { 00:04:57.209 "mask": "0x2000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "bdev_nvme": { 00:04:57.209 "mask": "0x4000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "sock": { 00:04:57.209 "mask": "0x8000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "blob": { 00:04:57.209 "mask": "0x10000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "bdev_raid": { 00:04:57.209 "mask": "0x20000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 }, 00:04:57.209 "scheduler": { 00:04:57.209 "mask": "0x40000", 00:04:57.209 "tpoint_mask": "0x0" 00:04:57.209 } 00:04:57.209 }' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:57.209 ************************************ 00:04:57.209 END TEST rpc_trace_cmd_test 00:04:57.209 ************************************ 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:57.209 00:04:57.209 real 0m0.168s 00:04:57.209 user 0m0.139s 00:04:57.209 sys 0m0.021s 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.209 11:20:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 11:20:56 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:57.471 11:20:56 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:57.471 11:20:56 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:57.471 11:20:56 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:57.471 11:20:56 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:57.471 11:20:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 ************************************ 00:04:57.471 START TEST rpc_daemon_integrity 00:04:57.471 ************************************ 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:57.471 { 00:04:57.471 "name": "Malloc2", 00:04:57.471 "aliases": [ 00:04:57.471 "a0e0a445-6dcb-4744-9594-4e74d6ffdba9" 00:04:57.471 ], 00:04:57.471 "product_name": "Malloc disk", 00:04:57.471 "block_size": 512, 00:04:57.471 "num_blocks": 16384, 00:04:57.471 "uuid": "a0e0a445-6dcb-4744-9594-4e74d6ffdba9", 00:04:57.471 "assigned_rate_limits": { 00:04:57.471 "rw_ios_per_sec": 0, 00:04:57.471 "rw_mbytes_per_sec": 0, 00:04:57.471 "r_mbytes_per_sec": 0, 00:04:57.471 "w_mbytes_per_sec": 0 00:04:57.471 }, 00:04:57.471 "claimed": false, 00:04:57.471 "zoned": false, 00:04:57.471 "supported_io_types": { 00:04:57.471 "read": true, 00:04:57.471 "write": true, 00:04:57.471 "unmap": true, 00:04:57.471 "flush": true, 00:04:57.471 "reset": true, 00:04:57.471 "nvme_admin": false, 00:04:57.471 "nvme_io": false, 00:04:57.471 "nvme_io_md": false, 00:04:57.471 "write_zeroes": true, 00:04:57.471 "zcopy": true, 00:04:57.471 "get_zone_info": false, 00:04:57.471 "zone_management": false, 00:04:57.471 "zone_append": false, 00:04:57.471 "compare": false, 00:04:57.471 "compare_and_write": false, 00:04:57.471 "abort": true, 00:04:57.471 "seek_hole": false, 00:04:57.471 "seek_data": false, 00:04:57.471 "copy": true, 00:04:57.471 "nvme_iov_md": false 00:04:57.471 }, 00:04:57.471 "memory_domains": [ 00:04:57.471 { 00:04:57.471 "dma_device_id": "system", 00:04:57.471 "dma_device_type": 1 00:04:57.471 }, 00:04:57.471 { 00:04:57.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.471 "dma_device_type": 2 00:04:57.471 } 00:04:57.471 ], 00:04:57.471 "driver_specific": {} 00:04:57.471 } 00:04:57.471 ]' 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 [2024-11-05 11:20:56.628854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:57.471 [2024-11-05 11:20:56.628994] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:57.471 [2024-11-05 11:20:56.629018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:57.471 [2024-11-05 11:20:56.629029] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:57.471 [2024-11-05 11:20:56.631137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:57.471 [2024-11-05 11:20:56.631172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:57.471 Passthru0 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.471 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:57.471 { 00:04:57.471 "name": "Malloc2", 00:04:57.471 "aliases": [ 00:04:57.471 "a0e0a445-6dcb-4744-9594-4e74d6ffdba9" 00:04:57.471 ], 00:04:57.471 "product_name": "Malloc disk", 00:04:57.471 "block_size": 512, 00:04:57.471 "num_blocks": 16384, 00:04:57.471 "uuid": "a0e0a445-6dcb-4744-9594-4e74d6ffdba9", 00:04:57.471 "assigned_rate_limits": { 00:04:57.471 "rw_ios_per_sec": 0, 00:04:57.471 "rw_mbytes_per_sec": 0, 00:04:57.471 "r_mbytes_per_sec": 0, 00:04:57.471 "w_mbytes_per_sec": 0 00:04:57.471 }, 00:04:57.471 "claimed": true, 00:04:57.471 "claim_type": "exclusive_write", 00:04:57.471 "zoned": false, 00:04:57.471 "supported_io_types": { 00:04:57.471 "read": true, 00:04:57.471 "write": true, 00:04:57.471 "unmap": true, 00:04:57.471 "flush": true, 00:04:57.471 "reset": true, 00:04:57.471 "nvme_admin": false, 00:04:57.471 "nvme_io": false, 00:04:57.471 "nvme_io_md": false, 00:04:57.471 "write_zeroes": true, 00:04:57.471 "zcopy": true, 00:04:57.471 "get_zone_info": false, 00:04:57.471 "zone_management": false, 00:04:57.471 "zone_append": false, 00:04:57.471 "compare": false, 00:04:57.471 "compare_and_write": false, 00:04:57.471 "abort": true, 00:04:57.471 "seek_hole": false, 00:04:57.471 "seek_data": false, 00:04:57.471 "copy": true, 00:04:57.471 "nvme_iov_md": false 00:04:57.471 }, 00:04:57.471 "memory_domains": [ 00:04:57.471 { 00:04:57.471 "dma_device_id": "system", 00:04:57.471 "dma_device_type": 1 00:04:57.471 }, 00:04:57.471 { 00:04:57.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.471 "dma_device_type": 2 00:04:57.471 } 00:04:57.471 ], 00:04:57.471 "driver_specific": {} 00:04:57.471 }, 00:04:57.471 { 00:04:57.471 "name": "Passthru0", 00:04:57.471 "aliases": [ 00:04:57.471 "af451c4f-419e-519a-9eeb-100f84ca4214" 00:04:57.471 ], 00:04:57.471 "product_name": "passthru", 00:04:57.471 "block_size": 512, 00:04:57.471 "num_blocks": 16384, 00:04:57.471 "uuid": "af451c4f-419e-519a-9eeb-100f84ca4214", 00:04:57.471 "assigned_rate_limits": { 00:04:57.471 "rw_ios_per_sec": 0, 00:04:57.471 "rw_mbytes_per_sec": 0, 00:04:57.471 "r_mbytes_per_sec": 0, 00:04:57.471 "w_mbytes_per_sec": 0 00:04:57.471 }, 00:04:57.471 "claimed": false, 00:04:57.471 "zoned": false, 00:04:57.471 "supported_io_types": { 00:04:57.471 "read": true, 00:04:57.471 "write": true, 00:04:57.471 "unmap": true, 00:04:57.471 "flush": true, 00:04:57.471 "reset": true, 00:04:57.471 "nvme_admin": 
false, 00:04:57.471 "nvme_io": false, 00:04:57.471 "nvme_io_md": false, 00:04:57.471 "write_zeroes": true, 00:04:57.471 "zcopy": true, 00:04:57.472 "get_zone_info": false, 00:04:57.472 "zone_management": false, 00:04:57.472 "zone_append": false, 00:04:57.472 "compare": false, 00:04:57.472 "compare_and_write": false, 00:04:57.472 "abort": true, 00:04:57.472 "seek_hole": false, 00:04:57.472 "seek_data": false, 00:04:57.472 "copy": true, 00:04:57.472 "nvme_iov_md": false 00:04:57.472 }, 00:04:57.472 "memory_domains": [ 00:04:57.472 { 00:04:57.472 "dma_device_id": "system", 00:04:57.472 "dma_device_type": 1 00:04:57.472 }, 00:04:57.472 { 00:04:57.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:57.472 "dma_device_type": 2 00:04:57.472 } 00:04:57.472 ], 00:04:57.472 "driver_specific": { 00:04:57.472 "passthru": { 00:04:57.472 "name": "Passthru0", 00:04:57.472 "base_bdev_name": "Malloc2" 00:04:57.472 } 00:04:57.472 } 00:04:57.472 } 00:04:57.472 ]' 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:57.472 ************************************ 00:04:57.472 END TEST rpc_daemon_integrity 00:04:57.472 ************************************ 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:57.472 00:04:57.472 real 0m0.233s 00:04:57.472 user 0m0.120s 00:04:57.472 sys 0m0.030s 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:57.472 11:20:56 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:57.733 11:20:56 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:57.733 11:20:56 rpc -- rpc/rpc.sh@84 -- # killprocess 57082 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@952 -- # '[' -z 57082 ']' 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@956 -- # kill -0 57082 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@957 -- # uname 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57082 00:04:57.733 killing process with pid 
57082 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57082' 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@971 -- # kill 57082 00:04:57.733 11:20:56 rpc -- common/autotest_common.sh@976 -- # wait 57082 00:04:59.128 ************************************ 00:04:59.128 END TEST rpc 00:04:59.128 ************************************ 00:04:59.128 00:04:59.128 real 0m3.522s 00:04:59.128 user 0m3.914s 00:04:59.128 sys 0m0.594s 00:04:59.128 11:20:58 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:59.128 11:20:58 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.128 11:20:58 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:59.128 11:20:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.128 11:20:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.128 11:20:58 -- common/autotest_common.sh@10 -- # set +x 00:04:59.128 ************************************ 00:04:59.128 START TEST skip_rpc 00:04:59.128 ************************************ 00:04:59.128 11:20:58 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:59.390 * Looking for test storage... 00:04:59.390 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:59.390 11:20:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.390 --rc genhtml_branch_coverage=1 00:04:59.390 --rc genhtml_function_coverage=1 00:04:59.390 --rc genhtml_legend=1 00:04:59.390 --rc geninfo_all_blocks=1 00:04:59.390 --rc geninfo_unexecuted_blocks=1 00:04:59.390 00:04:59.390 ' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.390 --rc genhtml_branch_coverage=1 00:04:59.390 --rc genhtml_function_coverage=1 00:04:59.390 --rc genhtml_legend=1 00:04:59.390 --rc geninfo_all_blocks=1 00:04:59.390 --rc geninfo_unexecuted_blocks=1 00:04:59.390 00:04:59.390 ' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.390 --rc genhtml_branch_coverage=1 00:04:59.390 --rc genhtml_function_coverage=1 00:04:59.390 --rc genhtml_legend=1 00:04:59.390 --rc geninfo_all_blocks=1 00:04:59.390 --rc geninfo_unexecuted_blocks=1 00:04:59.390 00:04:59.390 ' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:59.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:59.390 --rc genhtml_branch_coverage=1 00:04:59.390 --rc genhtml_function_coverage=1 00:04:59.390 --rc genhtml_legend=1 00:04:59.390 --rc geninfo_all_blocks=1 00:04:59.390 --rc geninfo_unexecuted_blocks=1 00:04:59.390 00:04:59.390 ' 00:04:59.390 11:20:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:59.390 11:20:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.390 11:20:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:59.390 11:20:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.390 ************************************ 00:04:59.390 START TEST skip_rpc 00:04:59.390 ************************************ 00:04:59.390 11:20:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:04:59.390 11:20:58 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57295 00:04:59.390 11:20:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:59.390 11:20:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:59.390 11:20:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:59.390 [2024-11-05 11:20:58.601193] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:04:59.390 [2024-11-05 11:20:58.601352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57295 ] 00:04:59.649 [2024-11-05 11:20:58.768529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.649 [2024-11-05 11:20:58.892028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57295 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 57295 ']' 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 57295 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57295 00:05:04.934 killing process with pid 57295 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57295' 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@971 
-- # kill 57295 00:05:04.934 11:21:03 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 57295 00:05:05.508 ************************************ 00:05:05.508 END TEST skip_rpc 00:05:05.508 ************************************ 00:05:05.508 00:05:05.508 real 0m6.241s 00:05:05.508 user 0m5.759s 00:05:05.508 sys 0m0.369s 00:05:05.508 11:21:04 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:05.508 11:21:04 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.768 11:21:04 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:05.768 11:21:04 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:05.768 11:21:04 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:05.768 11:21:04 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:05.768 ************************************ 00:05:05.768 START TEST skip_rpc_with_json 00:05:05.768 ************************************ 00:05:05.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57392 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57392 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 57392 ']' 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:05.768 11:21:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:05.768 [2024-11-05 11:21:04.871633] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
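The skip_rpc case above boots spdk_tgt with --no-rpc-server and then asserts that an RPC call fails while the target itself keeps running. A minimal sketch of that check, assuming it is run from an SPDK source tree with a built spdk_tgt and the default /var/tmp/spdk.sock socket path:

  # start the target without an RPC server, pinned to core 0 as in the test
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # with no RPC listener on /var/tmp/spdk.sock, any RPC is expected to fail
  if ./scripts/rpc.py spdk_get_version; then
      echo 'unexpected: RPC succeeded without an RPC server' >&2
  fi
  kill %1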
00:05:05.768 [2024-11-05 11:21:04.871946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57392 ] 00:05:05.768 [2024-11-05 11:21:05.028569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.027 [2024-11-05 11:21:05.132294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.593 [2024-11-05 11:21:05.730992] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:06.593 request: 00:05:06.593 { 00:05:06.593 "trtype": "tcp", 00:05:06.593 "method": "nvmf_get_transports", 00:05:06.593 "req_id": 1 00:05:06.593 } 00:05:06.593 Got JSON-RPC error response 00:05:06.593 response: 00:05:06.593 { 00:05:06.593 "code": -19, 00:05:06.593 "message": "No such device" 00:05:06.593 } 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.593 [2024-11-05 11:21:05.739101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:06.593 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:06.851 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:06.851 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:06.851 { 00:05:06.851 "subsystems": [ 00:05:06.851 { 00:05:06.851 "subsystem": "fsdev", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "fsdev_set_opts", 00:05:06.851 "params": { 00:05:06.851 "fsdev_io_pool_size": 65535, 00:05:06.851 "fsdev_io_cache_size": 256 00:05:06.851 } 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "keyring", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "iobuf", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "iobuf_set_options", 00:05:06.851 "params": { 00:05:06.851 "small_pool_count": 8192, 00:05:06.851 "large_pool_count": 1024, 00:05:06.851 "small_bufsize": 8192, 00:05:06.851 "large_bufsize": 135168, 00:05:06.851 "enable_numa": false 00:05:06.851 } 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "sock", 00:05:06.851 "config": [ 00:05:06.851 { 
00:05:06.851 "method": "sock_set_default_impl", 00:05:06.851 "params": { 00:05:06.851 "impl_name": "posix" 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "sock_impl_set_options", 00:05:06.851 "params": { 00:05:06.851 "impl_name": "ssl", 00:05:06.851 "recv_buf_size": 4096, 00:05:06.851 "send_buf_size": 4096, 00:05:06.851 "enable_recv_pipe": true, 00:05:06.851 "enable_quickack": false, 00:05:06.851 "enable_placement_id": 0, 00:05:06.851 "enable_zerocopy_send_server": true, 00:05:06.851 "enable_zerocopy_send_client": false, 00:05:06.851 "zerocopy_threshold": 0, 00:05:06.851 "tls_version": 0, 00:05:06.851 "enable_ktls": false 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "sock_impl_set_options", 00:05:06.851 "params": { 00:05:06.851 "impl_name": "posix", 00:05:06.851 "recv_buf_size": 2097152, 00:05:06.851 "send_buf_size": 2097152, 00:05:06.851 "enable_recv_pipe": true, 00:05:06.851 "enable_quickack": false, 00:05:06.851 "enable_placement_id": 0, 00:05:06.851 "enable_zerocopy_send_server": true, 00:05:06.851 "enable_zerocopy_send_client": false, 00:05:06.851 "zerocopy_threshold": 0, 00:05:06.851 "tls_version": 0, 00:05:06.851 "enable_ktls": false 00:05:06.851 } 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "vmd", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "accel", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "accel_set_options", 00:05:06.851 "params": { 00:05:06.851 "small_cache_size": 128, 00:05:06.851 "large_cache_size": 16, 00:05:06.851 "task_count": 2048, 00:05:06.851 "sequence_count": 2048, 00:05:06.851 "buf_count": 2048 00:05:06.851 } 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "bdev", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "bdev_set_options", 00:05:06.851 "params": { 00:05:06.851 "bdev_io_pool_size": 65535, 00:05:06.851 "bdev_io_cache_size": 256, 00:05:06.851 "bdev_auto_examine": true, 00:05:06.851 "iobuf_small_cache_size": 128, 00:05:06.851 "iobuf_large_cache_size": 16 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "bdev_raid_set_options", 00:05:06.851 "params": { 00:05:06.851 "process_window_size_kb": 1024, 00:05:06.851 "process_max_bandwidth_mb_sec": 0 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "bdev_iscsi_set_options", 00:05:06.851 "params": { 00:05:06.851 "timeout_sec": 30 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "bdev_nvme_set_options", 00:05:06.851 "params": { 00:05:06.851 "action_on_timeout": "none", 00:05:06.851 "timeout_us": 0, 00:05:06.851 "timeout_admin_us": 0, 00:05:06.851 "keep_alive_timeout_ms": 10000, 00:05:06.851 "arbitration_burst": 0, 00:05:06.851 "low_priority_weight": 0, 00:05:06.851 "medium_priority_weight": 0, 00:05:06.851 "high_priority_weight": 0, 00:05:06.851 "nvme_adminq_poll_period_us": 10000, 00:05:06.851 "nvme_ioq_poll_period_us": 0, 00:05:06.851 "io_queue_requests": 0, 00:05:06.851 "delay_cmd_submit": true, 00:05:06.851 "transport_retry_count": 4, 00:05:06.851 "bdev_retry_count": 3, 00:05:06.851 "transport_ack_timeout": 0, 00:05:06.851 "ctrlr_loss_timeout_sec": 0, 00:05:06.851 "reconnect_delay_sec": 0, 00:05:06.851 "fast_io_fail_timeout_sec": 0, 00:05:06.851 "disable_auto_failback": false, 00:05:06.851 "generate_uuids": false, 00:05:06.851 "transport_tos": 0, 00:05:06.851 "nvme_error_stat": false, 00:05:06.851 "rdma_srq_size": 0, 00:05:06.851 "io_path_stat": false, 
00:05:06.851 "allow_accel_sequence": false, 00:05:06.851 "rdma_max_cq_size": 0, 00:05:06.851 "rdma_cm_event_timeout_ms": 0, 00:05:06.851 "dhchap_digests": [ 00:05:06.851 "sha256", 00:05:06.851 "sha384", 00:05:06.851 "sha512" 00:05:06.851 ], 00:05:06.851 "dhchap_dhgroups": [ 00:05:06.851 "null", 00:05:06.851 "ffdhe2048", 00:05:06.851 "ffdhe3072", 00:05:06.851 "ffdhe4096", 00:05:06.851 "ffdhe6144", 00:05:06.851 "ffdhe8192" 00:05:06.851 ] 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "bdev_nvme_set_hotplug", 00:05:06.851 "params": { 00:05:06.851 "period_us": 100000, 00:05:06.851 "enable": false 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "bdev_wait_for_examine" 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "scsi", 00:05:06.851 "config": null 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "scheduler", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "framework_set_scheduler", 00:05:06.851 "params": { 00:05:06.851 "name": "static" 00:05:06.851 } 00:05:06.851 } 00:05:06.851 ] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "vhost_scsi", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "vhost_blk", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "ublk", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "nbd", 00:05:06.851 "config": [] 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "subsystem": "nvmf", 00:05:06.851 "config": [ 00:05:06.851 { 00:05:06.851 "method": "nvmf_set_config", 00:05:06.851 "params": { 00:05:06.851 "discovery_filter": "match_any", 00:05:06.851 "admin_cmd_passthru": { 00:05:06.851 "identify_ctrlr": false 00:05:06.851 }, 00:05:06.851 "dhchap_digests": [ 00:05:06.851 "sha256", 00:05:06.851 "sha384", 00:05:06.851 "sha512" 00:05:06.851 ], 00:05:06.851 "dhchap_dhgroups": [ 00:05:06.851 "null", 00:05:06.851 "ffdhe2048", 00:05:06.851 "ffdhe3072", 00:05:06.851 "ffdhe4096", 00:05:06.851 "ffdhe6144", 00:05:06.851 "ffdhe8192" 00:05:06.851 ] 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "nvmf_set_max_subsystems", 00:05:06.851 "params": { 00:05:06.851 "max_subsystems": 1024 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "nvmf_set_crdt", 00:05:06.851 "params": { 00:05:06.851 "crdt1": 0, 00:05:06.851 "crdt2": 0, 00:05:06.851 "crdt3": 0 00:05:06.851 } 00:05:06.851 }, 00:05:06.851 { 00:05:06.851 "method": "nvmf_create_transport", 00:05:06.851 "params": { 00:05:06.851 "trtype": "TCP", 00:05:06.851 "max_queue_depth": 128, 00:05:06.851 "max_io_qpairs_per_ctrlr": 127, 00:05:06.852 "in_capsule_data_size": 4096, 00:05:06.852 "max_io_size": 131072, 00:05:06.852 "io_unit_size": 131072, 00:05:06.852 "max_aq_depth": 128, 00:05:06.852 "num_shared_buffers": 511, 00:05:06.852 "buf_cache_size": 4294967295, 00:05:06.852 "dif_insert_or_strip": false, 00:05:06.852 "zcopy": false, 00:05:06.852 "c2h_success": true, 00:05:06.852 "sock_priority": 0, 00:05:06.852 "abort_timeout_sec": 1, 00:05:06.852 "ack_timeout": 0, 00:05:06.852 "data_wr_pool_size": 0 00:05:06.852 } 00:05:06.852 } 00:05:06.852 ] 00:05:06.852 }, 00:05:06.852 { 00:05:06.852 "subsystem": "iscsi", 00:05:06.852 "config": [ 00:05:06.852 { 00:05:06.852 "method": "iscsi_set_options", 00:05:06.852 "params": { 00:05:06.852 "node_base": "iqn.2016-06.io.spdk", 00:05:06.852 "max_sessions": 128, 00:05:06.852 "max_connections_per_session": 2, 00:05:06.852 "max_queue_depth": 64, 00:05:06.852 
"default_time2wait": 2, 00:05:06.852 "default_time2retain": 20, 00:05:06.852 "first_burst_length": 8192, 00:05:06.852 "immediate_data": true, 00:05:06.852 "allow_duplicated_isid": false, 00:05:06.852 "error_recovery_level": 0, 00:05:06.852 "nop_timeout": 60, 00:05:06.852 "nop_in_interval": 30, 00:05:06.852 "disable_chap": false, 00:05:06.852 "require_chap": false, 00:05:06.852 "mutual_chap": false, 00:05:06.852 "chap_group": 0, 00:05:06.852 "max_large_datain_per_connection": 64, 00:05:06.852 "max_r2t_per_connection": 4, 00:05:06.852 "pdu_pool_size": 36864, 00:05:06.852 "immediate_data_pool_size": 16384, 00:05:06.852 "data_out_pool_size": 2048 00:05:06.852 } 00:05:06.852 } 00:05:06.852 ] 00:05:06.852 } 00:05:06.852 ] 00:05:06.852 } 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57392 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57392 ']' 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57392 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57392 00:05:06.852 killing process with pid 57392 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57392' 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 57392 00:05:06.852 11:21:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57392 00:05:08.233 11:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57427 00:05:08.233 11:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.233 11:21:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57427 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 57427 ']' 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 57427 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57427 00:05:13.521 killing process with pid 57427 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57427' 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 57427 00:05:13.521 11:21:12 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 57427 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:14.906 ************************************ 00:05:14.906 END TEST skip_rpc_with_json 00:05:14.906 ************************************ 00:05:14.906 00:05:14.906 real 0m9.040s 00:05:14.906 user 0m8.649s 00:05:14.906 sys 0m0.605s 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:14.906 11:21:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:14.906 11:21:13 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.906 11:21:13 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.906 11:21:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.906 ************************************ 00:05:14.906 START TEST skip_rpc_with_delay 00:05:14.906 ************************************ 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:14.906 11:21:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:14.906 [2024-11-05 11:21:13.975363] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
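The skip_rpc_with_json case captures the running target's configuration (the JSON document dumped above) with save_config, later boots a fresh target directly from that file with --json, and then greps the new log for the 'TCP Transport Init' notice to confirm the nvmf transport was replayed. A rough sketch of that round trip, assuming a running spdk_tgt reachable on the default socket and a writable config path:

  # give the saved config something to replay, then dump the live configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > config.json
  # stop the original target, then start a new one straight from the saved file
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json config.json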
00:05:14.906 ************************************ 00:05:14.906 END TEST skip_rpc_with_delay 00:05:14.906 ************************************ 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:14.906 00:05:14.906 real 0m0.133s 00:05:14.906 user 0m0.066s 00:05:14.906 sys 0m0.065s 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:14.906 11:21:14 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:14.906 11:21:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:14.906 11:21:14 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:14.906 11:21:14 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:14.906 11:21:14 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:14.906 11:21:14 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:14.906 11:21:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.906 ************************************ 00:05:14.906 START TEST exit_on_failed_rpc_init 00:05:14.906 ************************************ 00:05:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57555 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57555 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 57555 ']' 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:14.906 11:21:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:14.906 [2024-11-05 11:21:14.140268] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
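skip_rpc_with_delay exercises the rejected flag combination logged just above: --wait-for-rpc only makes sense when an RPC server will be started, so pairing it with --no-rpc-server must make spdk_tgt refuse to start. Reproducing it is a one-liner, with paths assumed as before:

  # expected to exit non-zero and print:
  #   Cannot use '--wait-for-rpc' if no RPC server is going to be started.
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; echo "exit code: $?"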
00:05:14.906 [2024-11-05 11:21:14.140392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57555 ] 00:05:15.166 [2024-11-05 11:21:14.296028] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.166 [2024-11-05 11:21:14.396761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:16.118 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:16.118 [2024-11-05 11:21:15.158480] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:05:16.118 [2024-11-05 11:21:15.158861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57570 ] 00:05:16.118 [2024-11-05 11:21:15.322351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.379 [2024-11-05 11:21:15.449969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:16.379 [2024-11-05 11:21:15.450088] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
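exit_on_failed_rpc_init provokes exactly the failure logged above: a second spdk_tgt is launched while the first still owns /var/tmp/spdk.sock, so the RPC listener cannot bind and the new instance shuts itself down. A simplified sketch of the conflict, assuming both instances use the default socket path:

  # first instance claims /var/tmp/spdk.sock
  ./build/bin/spdk_tgt -m 0x1 &
  sleep 5
  # second instance on a different core mask but the same socket; it should
  # fail with 'RPC Unix domain socket path /var/tmp/spdk.sock in use.'
  ./build/bin/spdk_tgt -m 0x2
  kill %1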
00:05:16.379 [2024-11-05 11:21:15.450104] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:16.379 [2024-11-05 11:21:15.450120] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57555 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 57555 ']' 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 57555 00:05:16.379 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57555 00:05:16.639 killing process with pid 57555 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57555' 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 57555 00:05:16.639 11:21:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 57555 00:05:18.019 ************************************ 00:05:18.019 END TEST exit_on_failed_rpc_init 00:05:18.019 ************************************ 00:05:18.019 00:05:18.019 real 0m3.161s 00:05:18.019 user 0m3.455s 00:05:18.019 sys 0m0.488s 00:05:18.019 11:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.019 11:21:17 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:18.019 11:21:17 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:18.019 ************************************ 00:05:18.019 END TEST skip_rpc 00:05:18.019 ************************************ 00:05:18.019 00:05:18.019 real 0m18.899s 00:05:18.019 user 0m18.076s 00:05:18.019 sys 0m1.676s 00:05:18.019 11:21:17 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.019 11:21:17 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.019 11:21:17 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:18.019 11:21:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.019 11:21:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.019 11:21:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.019 
************************************ 00:05:18.019 START TEST rpc_client 00:05:18.019 ************************************ 00:05:18.019 11:21:17 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:18.280 * Looking for test storage... 00:05:18.280 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.280 11:21:17 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.280 --rc genhtml_branch_coverage=1 00:05:18.280 --rc genhtml_function_coverage=1 00:05:18.280 --rc genhtml_legend=1 00:05:18.280 --rc geninfo_all_blocks=1 00:05:18.280 --rc geninfo_unexecuted_blocks=1 00:05:18.280 00:05:18.280 ' 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.280 --rc genhtml_branch_coverage=1 00:05:18.280 --rc genhtml_function_coverage=1 00:05:18.280 --rc genhtml_legend=1 00:05:18.280 --rc geninfo_all_blocks=1 00:05:18.280 --rc geninfo_unexecuted_blocks=1 00:05:18.280 00:05:18.280 ' 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.280 --rc genhtml_branch_coverage=1 00:05:18.280 --rc genhtml_function_coverage=1 00:05:18.280 --rc genhtml_legend=1 00:05:18.280 --rc geninfo_all_blocks=1 00:05:18.280 --rc geninfo_unexecuted_blocks=1 00:05:18.280 00:05:18.280 ' 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.280 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.280 --rc genhtml_branch_coverage=1 00:05:18.280 --rc genhtml_function_coverage=1 00:05:18.280 --rc genhtml_legend=1 00:05:18.280 --rc geninfo_all_blocks=1 00:05:18.280 --rc geninfo_unexecuted_blocks=1 00:05:18.280 00:05:18.280 ' 00:05:18.280 11:21:17 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:18.280 OK 00:05:18.280 11:21:17 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:18.280 00:05:18.280 real 0m0.179s 00:05:18.280 user 0m0.097s 00:05:18.280 sys 0m0.088s 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.280 11:21:17 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:18.280 ************************************ 00:05:18.280 END TEST rpc_client 00:05:18.280 ************************************ 00:05:18.280 11:21:17 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.280 11:21:17 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.280 11:21:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.280 11:21:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.280 ************************************ 00:05:18.280 START TEST json_config 00:05:18.280 ************************************ 00:05:18.280 11:21:17 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.543 11:21:17 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.543 11:21:17 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.543 11:21:17 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.543 11:21:17 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.543 11:21:17 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.543 11:21:17 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:18.543 11:21:17 json_config -- scripts/common.sh@345 -- # : 1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.543 11:21:17 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:18.543 11:21:17 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@353 -- # local d=1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.543 11:21:17 json_config -- scripts/common.sh@355 -- # echo 1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.543 11:21:17 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@353 -- # local d=2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.543 11:21:17 json_config -- scripts/common.sh@355 -- # echo 2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.543 11:21:17 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.543 11:21:17 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.543 11:21:17 json_config -- scripts/common.sh@368 -- # return 0 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.543 --rc genhtml_branch_coverage=1 00:05:18.543 --rc genhtml_function_coverage=1 00:05:18.543 --rc genhtml_legend=1 00:05:18.543 --rc geninfo_all_blocks=1 00:05:18.543 --rc geninfo_unexecuted_blocks=1 00:05:18.543 00:05:18.543 ' 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.543 11:21:17 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.543 11:21:17 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.543 11:21:17 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.543 11:21:17 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.543 11:21:17 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.543 11:21:17 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.543 11:21:17 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.543 11:21:17 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.543 11:21:17 json_config -- paths/export.sh@5 -- # export PATH 00:05:18.543 11:21:17 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@51 -- # : 0 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.543 11:21:17 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.543 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.543 11:21:17 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:18.543 WARNING: No tests are enabled so not running JSON configuration tests 00:05:18.543 11:21:17 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:18.543 00:05:18.543 real 0m0.148s 00:05:18.543 user 0m0.083s 00:05:18.543 sys 0m0.064s 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:18.543 11:21:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:18.543 ************************************ 00:05:18.543 END TEST json_config 00:05:18.543 ************************************ 00:05:18.544 11:21:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.544 11:21:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:18.544 11:21:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:18.544 11:21:17 -- common/autotest_common.sh@10 -- # set +x 00:05:18.544 ************************************ 00:05:18.544 START TEST json_config_extra_key 00:05:18.544 ************************************ 00:05:18.544 11:21:17 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:18.544 11:21:17 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:18.544 11:21:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:18.544 11:21:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:18.805 11:21:17 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:18.805 11:21:17 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:18.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.805 --rc genhtml_branch_coverage=1 00:05:18.805 --rc genhtml_function_coverage=1 00:05:18.805 --rc genhtml_legend=1 00:05:18.805 --rc geninfo_all_blocks=1 00:05:18.805 --rc geninfo_unexecuted_blocks=1 00:05:18.805 00:05:18.805 ' 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:18.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.805 --rc genhtml_branch_coverage=1 00:05:18.805 --rc genhtml_function_coverage=1 00:05:18.805 --rc genhtml_legend=1 00:05:18.805 --rc geninfo_all_blocks=1 00:05:18.805 --rc geninfo_unexecuted_blocks=1 00:05:18.805 00:05:18.805 ' 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:18.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.805 --rc genhtml_branch_coverage=1 00:05:18.805 --rc genhtml_function_coverage=1 00:05:18.805 --rc genhtml_legend=1 00:05:18.805 --rc geninfo_all_blocks=1 00:05:18.805 --rc geninfo_unexecuted_blocks=1 00:05:18.805 00:05:18.805 ' 00:05:18.805 11:21:17 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:18.805 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:18.805 --rc genhtml_branch_coverage=1 00:05:18.805 --rc 
genhtml_function_coverage=1 00:05:18.806 --rc genhtml_legend=1 00:05:18.806 --rc geninfo_all_blocks=1 00:05:18.806 --rc geninfo_unexecuted_blocks=1 00:05:18.806 00:05:18.806 ' 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=e5c045c2-6111-49f2-a3c8-a62ffafc47a5 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:18.806 11:21:17 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:18.806 11:21:17 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:18.806 11:21:17 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:18.806 11:21:17 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:18.806 11:21:17 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.806 11:21:17 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.806 11:21:17 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.806 11:21:17 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:18.806 11:21:17 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:18.806 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:18.806 11:21:17 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:18.806 INFO: launching applications... 00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
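The trace that follows records json_config_test_start_app bringing up spdk_tgt with the extra_key.json configuration and waiting for its RPC socket. As a rough standalone sketch of that launch-and-wait pattern (binary path, socket path, core mask and retry count are taken from the surrounding log; the rpc.py probe is an assumed stand-in, not the harness's common.sh helpers):

# Sketch only -- approximates the launch/wait sequence recorded below.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC_SOCK=/var/tmp/spdk_tgt.sock
CONFIG=/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json

"$SPDK_TGT" -m 0x1 -s 1024 -r "$RPC_SOCK" --json "$CONFIG" &
tgt_pid=$!

# Poll (up to 100 attempts, as in the log's max_retries) until the target
# answers RPCs on its UNIX-domain socket.
for _ in $(seq 1 100); do
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods &>/dev/null && break
    sleep 0.1
done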
00:05:18.806 11:21:17 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:18.806 Waiting for target to run... 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57767 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57767 /var/tmp/spdk_tgt.sock 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 57767 ']' 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:18.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:18.806 11:21:17 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:18.806 11:21:17 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:18.806 [2024-11-05 11:21:17.960006] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:05:18.806 [2024-11-05 11:21:17.960379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57767 ] 00:05:19.377 [2024-11-05 11:21:18.365648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.377 [2024-11-05 11:21:18.507329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.947 00:05:19.947 INFO: shutting down applications... 00:05:19.947 11:21:19 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:19.947 11:21:19 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:19.947 11:21:19 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
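The shutdown phase recorded next sends SIGINT to the target and then polls it with kill -0, sleeping 0.5 s between attempts and giving up after 30 tries. A condensed sketch of that wait-for-exit loop (same signal, interval and limit as in the trace below; an illustration, not the harness code itself):

# Sketch only -- condensed form of the SIGINT-and-wait loop recorded below.
tgt_pid=${tgt_pid:-57767}   # PID of the spdk_tgt launched above (57767 in this run)
kill -SIGINT "$tgt_pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$tgt_pid" 2>/dev/null; then
        echo 'SPDK target shutdown done'
        break
    fi
    sleep 0.5
done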
00:05:19.947 11:21:19 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57767 ]] 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57767 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57767 00:05:19.947 11:21:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.526 11:21:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.526 11:21:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.526 11:21:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57767 00:05:20.526 11:21:19 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:20.788 11:21:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:20.788 11:21:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:20.788 11:21:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57767 00:05:20.788 11:21:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.359 11:21:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.359 11:21:20 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.360 11:21:20 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57767 00:05:21.360 11:21:20 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57767 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:21.932 11:21:21 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:21.932 SPDK target shutdown done 00:05:21.932 Success 00:05:21.932 11:21:21 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:21.932 00:05:21.932 real 0m3.347s 00:05:21.932 user 0m2.977s 00:05:21.932 sys 0m0.512s 00:05:21.932 ************************************ 00:05:21.932 END TEST json_config_extra_key 00:05:21.932 ************************************ 00:05:21.932 11:21:21 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:21.932 11:21:21 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:21.932 11:21:21 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.932 11:21:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:21.932 11:21:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:21.932 11:21:21 -- common/autotest_common.sh@10 -- # set +x 00:05:21.932 
************************************ 00:05:21.932 START TEST alias_rpc 00:05:21.932 ************************************ 00:05:21.932 11:21:21 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:21.932 * Looking for test storage... 00:05:22.193 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:22.193 11:21:21 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:22.193 11:21:21 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:22.193 11:21:21 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:22.193 11:21:21 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:22.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:22.194 11:21:21 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:22.194 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:22.194 --rc genhtml_branch_coverage=1 00:05:22.194 --rc genhtml_function_coverage=1 00:05:22.194 --rc genhtml_legend=1 00:05:22.194 --rc geninfo_all_blocks=1 00:05:22.194 --rc geninfo_unexecuted_blocks=1 00:05:22.194 00:05:22.194 ' 00:05:22.194 11:21:21 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:22.194 11:21:21 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57862 00:05:22.194 11:21:21 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57862 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 57862 ']' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:22.194 11:21:21 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.194 11:21:21 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.194 [2024-11-05 11:21:21.376859] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:22.194 [2024-11-05 11:21:21.377243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57862 ] 00:05:22.455 [2024-11-05 11:21:21.541097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.455 [2024-11-05 11:21:21.653263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:23.399 11:21:22 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:23.399 11:21:22 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57862 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 57862 ']' 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 57862 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57862 00:05:23.399 killing process with pid 57862 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57862' 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@971 -- # kill 57862 00:05:23.399 11:21:22 alias_rpc -- common/autotest_common.sh@976 -- # wait 57862 00:05:25.322 ************************************ 00:05:25.322 END TEST alias_rpc 00:05:25.322 ************************************ 00:05:25.322 00:05:25.322 real 0m3.125s 00:05:25.322 user 0m3.150s 00:05:25.322 sys 0m0.514s 00:05:25.322 11:21:24 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:25.322 11:21:24 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.322 11:21:24 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:25.322 11:21:24 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:25.322 11:21:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:25.322 11:21:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:25.322 11:21:24 -- common/autotest_common.sh@10 -- # set +x 00:05:25.322 ************************************ 00:05:25.322 START TEST spdkcli_tcp 00:05:25.322 ************************************ 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:25.322 * Looking for test storage... 
00:05:25.322 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:25.322 11:21:24 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.322 --rc genhtml_branch_coverage=1 00:05:25.322 --rc genhtml_function_coverage=1 00:05:25.322 --rc genhtml_legend=1 00:05:25.322 --rc geninfo_all_blocks=1 00:05:25.322 --rc geninfo_unexecuted_blocks=1 00:05:25.322 00:05:25.322 ' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.322 --rc genhtml_branch_coverage=1 00:05:25.322 --rc genhtml_function_coverage=1 00:05:25.322 --rc genhtml_legend=1 00:05:25.322 --rc geninfo_all_blocks=1 00:05:25.322 --rc geninfo_unexecuted_blocks=1 00:05:25.322 
00:05:25.322 ' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.322 --rc genhtml_branch_coverage=1 00:05:25.322 --rc genhtml_function_coverage=1 00:05:25.322 --rc genhtml_legend=1 00:05:25.322 --rc geninfo_all_blocks=1 00:05:25.322 --rc geninfo_unexecuted_blocks=1 00:05:25.322 00:05:25.322 ' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:25.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:25.322 --rc genhtml_branch_coverage=1 00:05:25.322 --rc genhtml_function_coverage=1 00:05:25.322 --rc genhtml_legend=1 00:05:25.322 --rc geninfo_all_blocks=1 00:05:25.322 --rc geninfo_unexecuted_blocks=1 00:05:25.322 00:05:25.322 ' 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57961 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57961 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 57961 ']' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:25.322 11:21:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:25.322 11:21:24 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:25.584 [2024-11-05 11:21:24.599913] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:25.584 [2024-11-05 11:21:24.600062] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57961 ] 00:05:25.584 [2024-11-05 11:21:24.767460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.844 [2024-11-05 11:21:24.908964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:25.844 [2024-11-05 11:21:24.909157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.417 11:21:25 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:26.417 11:21:25 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:26.417 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57978 00:05:26.417 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:26.417 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:26.678 [ 00:05:26.678 "bdev_malloc_delete", 00:05:26.678 "bdev_malloc_create", 00:05:26.678 "bdev_null_resize", 00:05:26.678 "bdev_null_delete", 00:05:26.678 "bdev_null_create", 00:05:26.678 "bdev_nvme_cuse_unregister", 00:05:26.678 "bdev_nvme_cuse_register", 00:05:26.678 "bdev_opal_new_user", 00:05:26.678 "bdev_opal_set_lock_state", 00:05:26.678 "bdev_opal_delete", 00:05:26.678 "bdev_opal_get_info", 00:05:26.678 "bdev_opal_create", 00:05:26.678 "bdev_nvme_opal_revert", 00:05:26.678 "bdev_nvme_opal_init", 00:05:26.678 "bdev_nvme_send_cmd", 00:05:26.678 "bdev_nvme_set_keys", 00:05:26.678 "bdev_nvme_get_path_iostat", 00:05:26.678 "bdev_nvme_get_mdns_discovery_info", 00:05:26.678 "bdev_nvme_stop_mdns_discovery", 00:05:26.678 "bdev_nvme_start_mdns_discovery", 00:05:26.678 "bdev_nvme_set_multipath_policy", 00:05:26.678 "bdev_nvme_set_preferred_path", 00:05:26.678 "bdev_nvme_get_io_paths", 00:05:26.678 "bdev_nvme_remove_error_injection", 00:05:26.678 "bdev_nvme_add_error_injection", 00:05:26.678 "bdev_nvme_get_discovery_info", 00:05:26.678 "bdev_nvme_stop_discovery", 00:05:26.678 "bdev_nvme_start_discovery", 00:05:26.678 "bdev_nvme_get_controller_health_info", 00:05:26.678 "bdev_nvme_disable_controller", 00:05:26.678 "bdev_nvme_enable_controller", 00:05:26.678 "bdev_nvme_reset_controller", 00:05:26.678 "bdev_nvme_get_transport_statistics", 00:05:26.678 "bdev_nvme_apply_firmware", 00:05:26.678 "bdev_nvme_detach_controller", 00:05:26.678 "bdev_nvme_get_controllers", 00:05:26.678 "bdev_nvme_attach_controller", 00:05:26.678 "bdev_nvme_set_hotplug", 00:05:26.678 "bdev_nvme_set_options", 00:05:26.678 "bdev_passthru_delete", 00:05:26.679 "bdev_passthru_create", 00:05:26.679 "bdev_lvol_set_parent_bdev", 00:05:26.679 "bdev_lvol_set_parent", 00:05:26.679 "bdev_lvol_check_shallow_copy", 00:05:26.679 "bdev_lvol_start_shallow_copy", 00:05:26.679 "bdev_lvol_grow_lvstore", 00:05:26.679 "bdev_lvol_get_lvols", 00:05:26.679 "bdev_lvol_get_lvstores", 00:05:26.679 "bdev_lvol_delete", 00:05:26.679 "bdev_lvol_set_read_only", 00:05:26.679 "bdev_lvol_resize", 00:05:26.679 "bdev_lvol_decouple_parent", 00:05:26.679 "bdev_lvol_inflate", 00:05:26.679 "bdev_lvol_rename", 00:05:26.679 "bdev_lvol_clone_bdev", 00:05:26.679 "bdev_lvol_clone", 00:05:26.679 "bdev_lvol_snapshot", 00:05:26.679 "bdev_lvol_create", 00:05:26.679 "bdev_lvol_delete_lvstore", 00:05:26.679 "bdev_lvol_rename_lvstore", 00:05:26.679 
"bdev_lvol_create_lvstore", 00:05:26.679 "bdev_raid_set_options", 00:05:26.679 "bdev_raid_remove_base_bdev", 00:05:26.679 "bdev_raid_add_base_bdev", 00:05:26.679 "bdev_raid_delete", 00:05:26.679 "bdev_raid_create", 00:05:26.679 "bdev_raid_get_bdevs", 00:05:26.679 "bdev_error_inject_error", 00:05:26.679 "bdev_error_delete", 00:05:26.679 "bdev_error_create", 00:05:26.679 "bdev_split_delete", 00:05:26.679 "bdev_split_create", 00:05:26.679 "bdev_delay_delete", 00:05:26.679 "bdev_delay_create", 00:05:26.679 "bdev_delay_update_latency", 00:05:26.679 "bdev_zone_block_delete", 00:05:26.679 "bdev_zone_block_create", 00:05:26.679 "blobfs_create", 00:05:26.679 "blobfs_detect", 00:05:26.679 "blobfs_set_cache_size", 00:05:26.679 "bdev_xnvme_delete", 00:05:26.679 "bdev_xnvme_create", 00:05:26.679 "bdev_aio_delete", 00:05:26.679 "bdev_aio_rescan", 00:05:26.679 "bdev_aio_create", 00:05:26.679 "bdev_ftl_set_property", 00:05:26.679 "bdev_ftl_get_properties", 00:05:26.679 "bdev_ftl_get_stats", 00:05:26.679 "bdev_ftl_unmap", 00:05:26.679 "bdev_ftl_unload", 00:05:26.679 "bdev_ftl_delete", 00:05:26.679 "bdev_ftl_load", 00:05:26.679 "bdev_ftl_create", 00:05:26.679 "bdev_virtio_attach_controller", 00:05:26.679 "bdev_virtio_scsi_get_devices", 00:05:26.679 "bdev_virtio_detach_controller", 00:05:26.679 "bdev_virtio_blk_set_hotplug", 00:05:26.679 "bdev_iscsi_delete", 00:05:26.679 "bdev_iscsi_create", 00:05:26.679 "bdev_iscsi_set_options", 00:05:26.679 "accel_error_inject_error", 00:05:26.679 "ioat_scan_accel_module", 00:05:26.679 "dsa_scan_accel_module", 00:05:26.679 "iaa_scan_accel_module", 00:05:26.679 "keyring_file_remove_key", 00:05:26.679 "keyring_file_add_key", 00:05:26.679 "keyring_linux_set_options", 00:05:26.679 "fsdev_aio_delete", 00:05:26.679 "fsdev_aio_create", 00:05:26.679 "iscsi_get_histogram", 00:05:26.679 "iscsi_enable_histogram", 00:05:26.679 "iscsi_set_options", 00:05:26.679 "iscsi_get_auth_groups", 00:05:26.679 "iscsi_auth_group_remove_secret", 00:05:26.679 "iscsi_auth_group_add_secret", 00:05:26.679 "iscsi_delete_auth_group", 00:05:26.679 "iscsi_create_auth_group", 00:05:26.679 "iscsi_set_discovery_auth", 00:05:26.679 "iscsi_get_options", 00:05:26.679 "iscsi_target_node_request_logout", 00:05:26.679 "iscsi_target_node_set_redirect", 00:05:26.679 "iscsi_target_node_set_auth", 00:05:26.679 "iscsi_target_node_add_lun", 00:05:26.679 "iscsi_get_stats", 00:05:26.679 "iscsi_get_connections", 00:05:26.679 "iscsi_portal_group_set_auth", 00:05:26.679 "iscsi_start_portal_group", 00:05:26.679 "iscsi_delete_portal_group", 00:05:26.679 "iscsi_create_portal_group", 00:05:26.679 "iscsi_get_portal_groups", 00:05:26.679 "iscsi_delete_target_node", 00:05:26.679 "iscsi_target_node_remove_pg_ig_maps", 00:05:26.679 "iscsi_target_node_add_pg_ig_maps", 00:05:26.679 "iscsi_create_target_node", 00:05:26.679 "iscsi_get_target_nodes", 00:05:26.679 "iscsi_delete_initiator_group", 00:05:26.679 "iscsi_initiator_group_remove_initiators", 00:05:26.679 "iscsi_initiator_group_add_initiators", 00:05:26.679 "iscsi_create_initiator_group", 00:05:26.679 "iscsi_get_initiator_groups", 00:05:26.679 "nvmf_set_crdt", 00:05:26.679 "nvmf_set_config", 00:05:26.679 "nvmf_set_max_subsystems", 00:05:26.679 "nvmf_stop_mdns_prr", 00:05:26.679 "nvmf_publish_mdns_prr", 00:05:26.679 "nvmf_subsystem_get_listeners", 00:05:26.679 "nvmf_subsystem_get_qpairs", 00:05:26.679 "nvmf_subsystem_get_controllers", 00:05:26.679 "nvmf_get_stats", 00:05:26.679 "nvmf_get_transports", 00:05:26.679 "nvmf_create_transport", 00:05:26.679 "nvmf_get_targets", 00:05:26.679 
"nvmf_delete_target", 00:05:26.679 "nvmf_create_target", 00:05:26.679 "nvmf_subsystem_allow_any_host", 00:05:26.679 "nvmf_subsystem_set_keys", 00:05:26.679 "nvmf_subsystem_remove_host", 00:05:26.679 "nvmf_subsystem_add_host", 00:05:26.679 "nvmf_ns_remove_host", 00:05:26.679 "nvmf_ns_add_host", 00:05:26.679 "nvmf_subsystem_remove_ns", 00:05:26.679 "nvmf_subsystem_set_ns_ana_group", 00:05:26.679 "nvmf_subsystem_add_ns", 00:05:26.679 "nvmf_subsystem_listener_set_ana_state", 00:05:26.679 "nvmf_discovery_get_referrals", 00:05:26.679 "nvmf_discovery_remove_referral", 00:05:26.679 "nvmf_discovery_add_referral", 00:05:26.679 "nvmf_subsystem_remove_listener", 00:05:26.679 "nvmf_subsystem_add_listener", 00:05:26.679 "nvmf_delete_subsystem", 00:05:26.679 "nvmf_create_subsystem", 00:05:26.679 "nvmf_get_subsystems", 00:05:26.679 "env_dpdk_get_mem_stats", 00:05:26.679 "nbd_get_disks", 00:05:26.679 "nbd_stop_disk", 00:05:26.679 "nbd_start_disk", 00:05:26.679 "ublk_recover_disk", 00:05:26.679 "ublk_get_disks", 00:05:26.679 "ublk_stop_disk", 00:05:26.679 "ublk_start_disk", 00:05:26.679 "ublk_destroy_target", 00:05:26.679 "ublk_create_target", 00:05:26.679 "virtio_blk_create_transport", 00:05:26.679 "virtio_blk_get_transports", 00:05:26.679 "vhost_controller_set_coalescing", 00:05:26.679 "vhost_get_controllers", 00:05:26.679 "vhost_delete_controller", 00:05:26.679 "vhost_create_blk_controller", 00:05:26.679 "vhost_scsi_controller_remove_target", 00:05:26.679 "vhost_scsi_controller_add_target", 00:05:26.679 "vhost_start_scsi_controller", 00:05:26.679 "vhost_create_scsi_controller", 00:05:26.679 "thread_set_cpumask", 00:05:26.679 "scheduler_set_options", 00:05:26.679 "framework_get_governor", 00:05:26.679 "framework_get_scheduler", 00:05:26.679 "framework_set_scheduler", 00:05:26.679 "framework_get_reactors", 00:05:26.679 "thread_get_io_channels", 00:05:26.679 "thread_get_pollers", 00:05:26.679 "thread_get_stats", 00:05:26.679 "framework_monitor_context_switch", 00:05:26.679 "spdk_kill_instance", 00:05:26.679 "log_enable_timestamps", 00:05:26.679 "log_get_flags", 00:05:26.679 "log_clear_flag", 00:05:26.679 "log_set_flag", 00:05:26.679 "log_get_level", 00:05:26.679 "log_set_level", 00:05:26.679 "log_get_print_level", 00:05:26.679 "log_set_print_level", 00:05:26.679 "framework_enable_cpumask_locks", 00:05:26.679 "framework_disable_cpumask_locks", 00:05:26.679 "framework_wait_init", 00:05:26.679 "framework_start_init", 00:05:26.679 "scsi_get_devices", 00:05:26.679 "bdev_get_histogram", 00:05:26.679 "bdev_enable_histogram", 00:05:26.679 "bdev_set_qos_limit", 00:05:26.679 "bdev_set_qd_sampling_period", 00:05:26.679 "bdev_get_bdevs", 00:05:26.679 "bdev_reset_iostat", 00:05:26.679 "bdev_get_iostat", 00:05:26.679 "bdev_examine", 00:05:26.679 "bdev_wait_for_examine", 00:05:26.679 "bdev_set_options", 00:05:26.679 "accel_get_stats", 00:05:26.679 "accel_set_options", 00:05:26.679 "accel_set_driver", 00:05:26.679 "accel_crypto_key_destroy", 00:05:26.679 "accel_crypto_keys_get", 00:05:26.679 "accel_crypto_key_create", 00:05:26.679 "accel_assign_opc", 00:05:26.679 "accel_get_module_info", 00:05:26.679 "accel_get_opc_assignments", 00:05:26.679 "vmd_rescan", 00:05:26.679 "vmd_remove_device", 00:05:26.679 "vmd_enable", 00:05:26.679 "sock_get_default_impl", 00:05:26.679 "sock_set_default_impl", 00:05:26.679 "sock_impl_set_options", 00:05:26.679 "sock_impl_get_options", 00:05:26.679 "iobuf_get_stats", 00:05:26.679 "iobuf_set_options", 00:05:26.679 "keyring_get_keys", 00:05:26.679 "framework_get_pci_devices", 00:05:26.679 
"framework_get_config", 00:05:26.679 "framework_get_subsystems", 00:05:26.679 "fsdev_set_opts", 00:05:26.679 "fsdev_get_opts", 00:05:26.679 "trace_get_info", 00:05:26.679 "trace_get_tpoint_group_mask", 00:05:26.679 "trace_disable_tpoint_group", 00:05:26.679 "trace_enable_tpoint_group", 00:05:26.679 "trace_clear_tpoint_mask", 00:05:26.679 "trace_set_tpoint_mask", 00:05:26.679 "notify_get_notifications", 00:05:26.679 "notify_get_types", 00:05:26.679 "spdk_get_version", 00:05:26.679 "rpc_get_methods" 00:05:26.679 ] 00:05:26.679 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:26.679 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:26.679 11:21:25 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57961 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 57961 ']' 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 57961 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:26.679 11:21:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57961 00:05:26.940 11:21:25 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:26.940 11:21:25 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:26.940 11:21:25 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57961' 00:05:26.940 killing process with pid 57961 00:05:26.940 11:21:25 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 57961 00:05:26.940 11:21:25 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 57961 00:05:28.862 ************************************ 00:05:28.862 END TEST spdkcli_tcp 00:05:28.862 ************************************ 00:05:28.862 00:05:28.862 real 0m3.276s 00:05:28.862 user 0m5.732s 00:05:28.862 sys 0m0.614s 00:05:28.862 11:21:27 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:28.862 11:21:27 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:28.862 11:21:27 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.862 11:21:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:28.862 11:21:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:28.862 11:21:27 -- common/autotest_common.sh@10 -- # set +x 00:05:28.862 ************************************ 00:05:28.862 START TEST dpdk_mem_utility 00:05:28.862 ************************************ 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:28.862 * Looking for test storage... 
00:05:28.862 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:28.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.862 11:21:27 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:28.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.862 --rc genhtml_branch_coverage=1 00:05:28.862 --rc genhtml_function_coverage=1 00:05:28.862 --rc genhtml_legend=1 00:05:28.862 --rc geninfo_all_blocks=1 00:05:28.862 --rc geninfo_unexecuted_blocks=1 00:05:28.862 00:05:28.862 ' 00:05:28.862 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:28.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.862 --rc genhtml_branch_coverage=1 00:05:28.862 --rc genhtml_function_coverage=1 00:05:28.863 --rc genhtml_legend=1 00:05:28.863 --rc geninfo_all_blocks=1 00:05:28.863 --rc geninfo_unexecuted_blocks=1 00:05:28.863 00:05:28.863 ' 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.863 --rc genhtml_branch_coverage=1 00:05:28.863 --rc genhtml_function_coverage=1 00:05:28.863 --rc genhtml_legend=1 00:05:28.863 --rc geninfo_all_blocks=1 00:05:28.863 --rc geninfo_unexecuted_blocks=1 00:05:28.863 00:05:28.863 ' 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:28.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.863 --rc genhtml_branch_coverage=1 00:05:28.863 --rc genhtml_function_coverage=1 00:05:28.863 --rc genhtml_legend=1 00:05:28.863 --rc geninfo_all_blocks=1 00:05:28.863 --rc geninfo_unexecuted_blocks=1 00:05:28.863 00:05:28.863 ' 00:05:28.863 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:28.863 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58072 00:05:28.863 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58072 00:05:28.863 11:21:27 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 58072 ']' 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:28.863 11:21:27 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:28.863 [2024-11-05 11:21:27.916070] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:28.863 [2024-11-05 11:21:27.916469] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58072 ] 00:05:28.863 [2024-11-05 11:21:28.083973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.124 [2024-11-05 11:21:28.219627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.695 11:21:28 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:29.695 11:21:28 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:29.695 11:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:29.695 11:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:29.695 11:21:28 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:29.695 11:21:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:29.695 { 00:05:29.695 "filename": "/tmp/spdk_mem_dump.txt" 00:05:29.695 } 00:05:29.695 11:21:28 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:29.695 11:21:28 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:29.955 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:29.955 1 heaps totaling size 816.000000 MiB 00:05:29.955 size: 816.000000 MiB heap id: 0 00:05:29.955 end heaps---------- 00:05:29.955 9 mempools totaling size 595.772034 MiB 00:05:29.955 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:29.955 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:29.955 size: 92.545471 MiB name: bdev_io_58072 00:05:29.955 size: 50.003479 MiB name: msgpool_58072 00:05:29.955 size: 36.509338 MiB name: fsdev_io_58072 00:05:29.955 size: 21.763794 MiB name: PDU_Pool 00:05:29.955 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:29.955 size: 4.133484 MiB name: evtpool_58072 00:05:29.955 size: 0.026123 MiB name: Session_Pool 00:05:29.955 end mempools------- 00:05:29.955 6 memzones totaling size 4.142822 MiB 00:05:29.955 size: 1.000366 MiB name: RG_ring_0_58072 00:05:29.955 size: 1.000366 MiB name: RG_ring_1_58072 00:05:29.955 size: 1.000366 MiB name: RG_ring_4_58072 00:05:29.955 size: 1.000366 MiB name: RG_ring_5_58072 00:05:29.955 size: 0.125366 MiB name: RG_ring_2_58072 00:05:29.955 size: 0.015991 MiB name: RG_ring_3_58072 00:05:29.955 end memzones------- 00:05:29.955 11:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:29.955 heap id: 0 total size: 816.000000 MiB number of busy elements: 322 number of free elements: 18 00:05:29.955 list of free elements. 
size: 16.789673 MiB 00:05:29.955 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:29.955 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:29.955 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:29.955 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:29.955 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:29.955 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:29.955 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:29.955 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:29.955 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:29.955 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:29.955 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:29.955 element at address: 0x20001ac00000 with size: 0.558777 MiB 00:05:29.955 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:29.955 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:29.955 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:29.955 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:29.955 element at address: 0x200028000000 with size: 0.391663 MiB 00:05:29.955 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:29.955 list of standard malloc elements. size: 199.289429 MiB 00:05:29.955 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:29.955 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:29.955 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:29.955 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:29.955 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:29.955 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:29.955 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:29.955 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:29.955 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:29.955 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:29.955 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:29.955 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:29.955 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:05:29.955 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:05:29.955 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:05:29.956 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200000cff000 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff180 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff280 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff380 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff480 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff580 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff680 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff780 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff880 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bff980 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71880 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71980 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c72080 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012c72180 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7cec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7cfc0 
with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:05:29.956 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:05:29.956 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f0c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f1c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f2c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f3c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f4c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f5c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f6c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f7c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f8c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac911c0 with size: 0.000244 MiB 
00:05:29.956 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac919c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:05:29.956 element at 
address: 0x20001ac943c0 with size: 0.000244 MiB 00:05:29.956 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94bc0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:05:29.957 element at address: 0x200028064440 with size: 0.000244 MiB 00:05:29.957 element at address: 0x200028064540 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b200 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b480 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b580 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b680 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b780 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b880 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806b980 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806be80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c080 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c180 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c280 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c380 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c480 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c580 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c680 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c780 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c880 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806c980 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d080 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d180 
with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d280 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d380 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d480 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d580 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d680 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d780 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d880 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806d980 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806da80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806db80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806de80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806df80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e080 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e180 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e280 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e380 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e480 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e580 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e680 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e780 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e880 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806e980 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f080 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f180 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f280 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f380 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f480 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f580 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f680 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f780 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f880 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806f980 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:05:29.957 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:05:29.957 list of memzone associated elements. 
size: 599.920898 MiB 00:05:29.957 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:29.957 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:29.957 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:29.957 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:29.957 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:29.957 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58072_0 00:05:29.957 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:29.957 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58072_0 00:05:29.957 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:29.957 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58072_0 00:05:29.957 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:29.957 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:29.957 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:29.957 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:29.957 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:29.957 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58072_0 00:05:29.957 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:29.957 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58072 00:05:29.957 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:29.957 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58072 00:05:29.957 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:29.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:29.957 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:29.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:29.957 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:29.957 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:29.957 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:29.957 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:29.957 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:29.957 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58072 00:05:29.957 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:29.957 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58072 00:05:29.957 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:29.957 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58072 00:05:29.957 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:29.957 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58072 00:05:29.957 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:29.957 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58072 00:05:29.957 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:29.957 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58072 00:05:29.957 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:29.957 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:29.957 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:29.957 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:29.957 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:29.957 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:29.957 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:29.957 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58072 00:05:29.957 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:29.957 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58072 00:05:29.957 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:29.957 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:29.957 element at address: 0x200028064640 with size: 0.023804 MiB 00:05:29.957 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:29.957 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:29.957 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58072 00:05:29.957 element at address: 0x20002806a7c0 with size: 0.002502 MiB 00:05:29.957 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:29.958 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:29.958 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58072 00:05:29.958 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:29.958 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58072 00:05:29.958 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:29.958 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58072 00:05:29.958 element at address: 0x20002806b300 with size: 0.000366 MiB 00:05:29.958 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:29.958 11:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:29.958 11:21:29 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58072 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 58072 ']' 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 58072 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58072 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58072' 00:05:29.958 killing process with pid 58072 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 58072 00:05:29.958 11:21:29 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 58072 00:05:31.870 00:05:31.870 real 0m2.955s 00:05:31.870 user 0m2.891s 00:05:31.870 sys 0m0.523s 00:05:31.870 ************************************ 00:05:31.870 END TEST dpdk_mem_utility 00:05:31.870 11:21:30 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:31.870 11:21:30 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:31.870 ************************************ 00:05:31.870 11:21:30 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.870 11:21:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:31.870 11:21:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.870 11:21:30 -- common/autotest_common.sh@10 -- # set +x 
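The dpdk_mem_utility run above exercises SPDK's memory introspection path: the target is asked over RPC to dump its DPDK memory state to a file, and scripts/dpdk_mem_info.py then summarizes heaps, mempools and memzones from that dump (the -m 0 form prints the per-element listing for heap 0 seen above). A minimal manual reproduction, using only the commands and paths visible in this log and assuming the target process is still running, would look like:

  # ask the running spdk_tgt to write its DPDK memory stats (per the JSON reply above, to /tmp/spdk_mem_dump.txt)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats

  # human-readable summary of heaps, mempools and memzones
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py

  # detailed element listing for heap 0, i.e. the long "element at address" dump shown above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0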
00:05:31.870 ************************************ 00:05:31.870 START TEST event 00:05:31.870 ************************************ 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:31.870 * Looking for test storage... 00:05:31.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:31.870 11:21:30 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:31.870 11:21:30 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:31.870 11:21:30 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:31.870 11:21:30 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:31.870 11:21:30 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:31.870 11:21:30 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:31.870 11:21:30 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:31.870 11:21:30 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:31.870 11:21:30 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:31.870 11:21:30 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:31.870 11:21:30 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:31.870 11:21:30 event -- scripts/common.sh@344 -- # case "$op" in 00:05:31.870 11:21:30 event -- scripts/common.sh@345 -- # : 1 00:05:31.870 11:21:30 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:31.870 11:21:30 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:31.870 11:21:30 event -- scripts/common.sh@365 -- # decimal 1 00:05:31.870 11:21:30 event -- scripts/common.sh@353 -- # local d=1 00:05:31.870 11:21:30 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:31.870 11:21:30 event -- scripts/common.sh@355 -- # echo 1 00:05:31.870 11:21:30 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:31.870 11:21:30 event -- scripts/common.sh@366 -- # decimal 2 00:05:31.870 11:21:30 event -- scripts/common.sh@353 -- # local d=2 00:05:31.870 11:21:30 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:31.870 11:21:30 event -- scripts/common.sh@355 -- # echo 2 00:05:31.870 11:21:30 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:31.870 11:21:30 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:31.870 11:21:30 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:31.870 11:21:30 event -- scripts/common.sh@368 -- # return 0 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:31.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.870 --rc genhtml_branch_coverage=1 00:05:31.870 --rc genhtml_function_coverage=1 00:05:31.870 --rc genhtml_legend=1 00:05:31.870 --rc geninfo_all_blocks=1 00:05:31.870 --rc geninfo_unexecuted_blocks=1 00:05:31.870 00:05:31.870 ' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:31.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.870 --rc genhtml_branch_coverage=1 00:05:31.870 --rc genhtml_function_coverage=1 00:05:31.870 --rc genhtml_legend=1 00:05:31.870 --rc 
geninfo_all_blocks=1 00:05:31.870 --rc geninfo_unexecuted_blocks=1 00:05:31.870 00:05:31.870 ' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:31.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.870 --rc genhtml_branch_coverage=1 00:05:31.870 --rc genhtml_function_coverage=1 00:05:31.870 --rc genhtml_legend=1 00:05:31.870 --rc geninfo_all_blocks=1 00:05:31.870 --rc geninfo_unexecuted_blocks=1 00:05:31.870 00:05:31.870 ' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:31.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:31.870 --rc genhtml_branch_coverage=1 00:05:31.870 --rc genhtml_function_coverage=1 00:05:31.870 --rc genhtml_legend=1 00:05:31.870 --rc geninfo_all_blocks=1 00:05:31.870 --rc geninfo_unexecuted_blocks=1 00:05:31.870 00:05:31.870 ' 00:05:31.870 11:21:30 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:31.870 11:21:30 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:31.870 11:21:30 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:31.870 11:21:30 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:31.870 11:21:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:31.870 ************************************ 00:05:31.870 START TEST event_perf 00:05:31.870 ************************************ 00:05:31.870 11:21:30 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:31.870 Running I/O for 1 seconds...[2024-11-05 11:21:30.880135] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:05:31.870 [2024-11-05 11:21:30.880333] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58169 ] 00:05:31.870 [2024-11-05 11:21:31.041033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:32.131 [2024-11-05 11:21:31.151643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.131 [2024-11-05 11:21:31.151927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:32.131 [2024-11-05 11:21:31.152541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:32.131 Running I/O for 1 seconds...[2024-11-05 11:21:31.152672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.070 00:05:33.070 lcore 0: 185878 00:05:33.070 lcore 1: 185877 00:05:33.070 lcore 2: 185877 00:05:33.070 lcore 3: 185879 00:05:33.070 done. 
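The per-lcore counters just printed are event_perf's one-second throughput sample: with the 0xF core mask each of the four reactors processed roughly 185 thousand events. The invocation is the one traced earlier in this test and can be repeated standalone; the flags below are taken directly from that trace, and reading -t as the run time in seconds follows the "Running I/O for 1 seconds" banner:

  # four reactors (cores 0-3), run the event loop for 1 second, then print per-lcore event counts
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1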
00:05:33.070 00:05:33.070 real 0m1.478s 00:05:33.070 user 0m4.250s 00:05:33.070 sys 0m0.105s 00:05:33.070 11:21:32 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:33.070 11:21:32 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:33.070 ************************************ 00:05:33.070 END TEST event_perf 00:05:33.070 ************************************ 00:05:33.331 11:21:32 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.331 11:21:32 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:33.331 11:21:32 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:33.331 11:21:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:33.331 ************************************ 00:05:33.331 START TEST event_reactor 00:05:33.331 ************************************ 00:05:33.331 11:21:32 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:33.331 [2024-11-05 11:21:32.413487] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:05:33.331 [2024-11-05 11:21:32.413706] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58209 ] 00:05:33.331 [2024-11-05 11:21:32.573248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.590 [2024-11-05 11:21:32.668875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.972 test_start 00:05:34.972 oneshot 00:05:34.972 tick 100 00:05:34.972 tick 100 00:05:34.972 tick 250 00:05:34.972 tick 100 00:05:34.972 tick 100 00:05:34.972 tick 100 00:05:34.972 tick 250 00:05:34.972 tick 500 00:05:34.972 tick 100 00:05:34.972 tick 100 00:05:34.972 tick 250 00:05:34.972 tick 100 00:05:34.972 tick 100 00:05:34.972 test_end 00:05:34.972 ************************************ 00:05:34.972 END TEST event_reactor 00:05:34.972 ************************************ 00:05:34.972 00:05:34.972 real 0m1.437s 00:05:34.972 user 0m1.258s 00:05:34.972 sys 0m0.069s 00:05:34.972 11:21:33 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.972 11:21:33 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:34.972 11:21:33 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.972 11:21:33 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:34.972 11:21:33 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.972 11:21:33 event -- common/autotest_common.sh@10 -- # set +x 00:05:34.972 ************************************ 00:05:34.972 START TEST event_reactor_perf 00:05:34.972 ************************************ 00:05:34.973 11:21:33 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:34.973 [2024-11-05 11:21:33.910621] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:34.973 [2024-11-05 11:21:33.910730] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58245 ] 00:05:34.973 [2024-11-05 11:21:34.069445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.973 [2024-11-05 11:21:34.171877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.356 test_start 00:05:36.356 test_end 00:05:36.356 Performance: 311778 events per second 00:05:36.356 00:05:36.356 real 0m1.442s 00:05:36.356 user 0m1.274s 00:05:36.356 sys 0m0.060s 00:05:36.356 11:21:35 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:36.356 11:21:35 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:36.356 ************************************ 00:05:36.356 END TEST event_reactor_perf 00:05:36.356 ************************************ 00:05:36.356 11:21:35 event -- event/event.sh@49 -- # uname -s 00:05:36.356 11:21:35 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:36.356 11:21:35 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.356 11:21:35 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:36.356 11:21:35 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:36.356 11:21:35 event -- common/autotest_common.sh@10 -- # set +x 00:05:36.356 ************************************ 00:05:36.356 START TEST event_scheduler 00:05:36.356 ************************************ 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:36.356 * Looking for test storage... 
00:05:36.356 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:36.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:36.356 11:21:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.356 --rc genhtml_branch_coverage=1 00:05:36.356 --rc genhtml_function_coverage=1 00:05:36.356 --rc genhtml_legend=1 00:05:36.356 --rc geninfo_all_blocks=1 00:05:36.356 --rc geninfo_unexecuted_blocks=1 00:05:36.356 00:05:36.356 ' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.356 --rc genhtml_branch_coverage=1 00:05:36.356 --rc genhtml_function_coverage=1 00:05:36.356 --rc genhtml_legend=1 00:05:36.356 --rc geninfo_all_blocks=1 00:05:36.356 --rc geninfo_unexecuted_blocks=1 00:05:36.356 00:05:36.356 ' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.356 --rc genhtml_branch_coverage=1 00:05:36.356 --rc genhtml_function_coverage=1 00:05:36.356 --rc genhtml_legend=1 00:05:36.356 --rc geninfo_all_blocks=1 00:05:36.356 --rc geninfo_unexecuted_blocks=1 00:05:36.356 00:05:36.356 ' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:36.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:36.356 --rc genhtml_branch_coverage=1 00:05:36.356 --rc genhtml_function_coverage=1 00:05:36.356 --rc genhtml_legend=1 00:05:36.356 --rc geninfo_all_blocks=1 00:05:36.356 --rc geninfo_unexecuted_blocks=1 00:05:36.356 00:05:36.356 ' 00:05:36.356 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:36.356 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58316 00:05:36.356 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.356 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58316 00:05:36.356 11:21:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 58316 ']' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:36.356 11:21:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:36.356 [2024-11-05 11:21:35.575763] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:36.356 [2024-11-05 11:21:35.575938] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58316 ] 00:05:36.617 [2024-11-05 11:21:35.750204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:36.617 [2024-11-05 11:21:35.856390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.617 [2024-11-05 11:21:35.856613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.617 [2024-11-05 11:21:35.856881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:36.617 [2024-11-05 11:21:35.856951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:37.243 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.243 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.243 POWER: Cannot set governor of lcore 0 to performance 00:05:37.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.243 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.243 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:37.243 POWER: Cannot set governor of lcore 0 to userspace 00:05:37.243 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:37.243 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:37.243 POWER: Unable to set Power Management Environment for lcore 0 00:05:37.243 [2024-11-05 11:21:36.402092] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:37.243 [2024-11-05 11:21:36.402110] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:37.243 [2024-11-05 11:21:36.402119] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:37.243 [2024-11-05 11:21:36.402136] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:37.243 [2024-11-05 11:21:36.402144] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:37.243 [2024-11-05 11:21:36.402152] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.243 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.243 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 [2024-11-05 11:21:36.624221] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
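The POWER and GUEST_CHANNEL errors above are expected inside this VM: the dynamic scheduler's DPDK governor cannot open the host cpufreq scaling_governor nodes or the virtio power-agent channel, so governor initialization fails and the scheduler falls back to its defaults (load limit 20, core limit 80, core busy 95) before framework_start_init completes. The control flow the harness drives is the usual wait-for-rpc pattern; a minimal sketch using the flags and RPC names visible in this log (with rpc.py standing in for the harness's rpc_cmd wrapper) is:

  # start the scheduler test app paused, main lcore 2, cores 0-3
  /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &

  # select the dynamic scheduler, then let initialization proceed
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_set_scheduler dynamic
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init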
00:05:37.503 11:21:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.503 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:37.503 11:21:36 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:37.503 11:21:36 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:37.503 11:21:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 ************************************ 00:05:37.503 START TEST scheduler_create_thread 00:05:37.503 ************************************ 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 2 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 3 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 4 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.503 5 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.503 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 6 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 7 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 8 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 9 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 10 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 ************************************ 00:05:37.504 END TEST scheduler_create_thread 00:05:37.504 ************************************ 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.504 00:05:37.504 real 0m0.111s 00:05:37.504 user 0m0.012s 00:05:37.504 sys 0m0.006s 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:37.504 11:21:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.504 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:37.504 11:21:36 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58316 00:05:37.504 11:21:36 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 58316 ']' 00:05:37.504 11:21:36 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 58316 00:05:37.504 11:21:36 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:37.504 11:21:36 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58316 00:05:37.870 killing process with pid 58316 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58316' 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 58316 00:05:37.870 11:21:36 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 58316 00:05:38.131 [2024-11-05 11:21:37.229059] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
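The scheduler_create_thread sub-test above is driven entirely through the test's RPC plugin: it creates pinned busy threads (-a 100) and pinned idle threads (-a 0) on each of the four cores plus a few unpinned ones, then raises one thread to 50% activity and deletes another so the dynamic scheduler has work to rebalance. A condensed sketch of the calls traced above (rpc.py again standing in for the rpc_cmd wrapper, which puts scheduler_plugin on the plugin search path) is:

  # busy and idle threads pinned to core 0 (the test repeats this for masks 0x2, 0x4, 0x8)
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0

  # unpinned threads, then adjust activity and delete by the returned thread id
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0    # returned thread_id 11 above
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100      # returned thread_id 12 above
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12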
00:05:38.704 ************************************ 00:05:38.704 END TEST event_scheduler 00:05:38.704 ************************************ 00:05:38.704 00:05:38.704 real 0m2.577s 00:05:38.704 user 0m4.268s 00:05:38.704 sys 0m0.345s 00:05:38.704 11:21:37 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:38.704 11:21:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:38.704 11:21:37 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:38.965 11:21:37 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:38.965 11:21:37 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:38.965 11:21:37 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:38.965 11:21:37 event -- common/autotest_common.sh@10 -- # set +x 00:05:38.965 ************************************ 00:05:38.965 START TEST app_repeat 00:05:38.965 ************************************ 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:38.965 Process app_repeat pid: 58394 00:05:38.965 spdk_app_start Round 0 00:05:38.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58394 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58394' 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58394 /var/tmp/spdk-nbd.sock 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58394 ']' 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:38.965 11:21:37 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:38.965 11:21:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:38.965 [2024-11-05 11:21:38.033155] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:38.965 [2024-11-05 11:21:38.033376] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58394 ] 00:05:38.965 [2024-11-05 11:21:38.192677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.222 [2024-11-05 11:21:38.294237] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.222 [2024-11-05 11:21:38.294249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.788 11:21:38 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:39.788 11:21:38 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:39.788 11:21:38 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.046 Malloc0 00:05:40.046 11:21:39 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.305 Malloc1 00:05:40.305 11:21:39 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.305 11:21:39 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.306 11:21:39 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.306 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.306 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.306 11:21:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.571 /dev/nbd0 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:40.571 11:21:39 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.571 1+0 records in 00:05:40.571 1+0 records out 00:05:40.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521782 s, 7.9 MB/s 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.571 /dev/nbd1 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.571 11:21:39 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.571 1+0 records in 00:05:40.571 1+0 records out 00:05:40.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000167773 s, 24.4 MB/s 00:05:40.571 11:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.831 11:21:39 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:40.831 11:21:39 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:40.831 11:21:39 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:40.831 11:21:39 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:40.832 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.832 11:21:39 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.832 11:21:39 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.832 11:21:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.832 
11:21:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.832 { 00:05:40.832 "nbd_device": "/dev/nbd0", 00:05:40.832 "bdev_name": "Malloc0" 00:05:40.832 }, 00:05:40.832 { 00:05:40.832 "nbd_device": "/dev/nbd1", 00:05:40.832 "bdev_name": "Malloc1" 00:05:40.832 } 00:05:40.832 ]' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.832 { 00:05:40.832 "nbd_device": "/dev/nbd0", 00:05:40.832 "bdev_name": "Malloc0" 00:05:40.832 }, 00:05:40.832 { 00:05:40.832 "nbd_device": "/dev/nbd1", 00:05:40.832 "bdev_name": "Malloc1" 00:05:40.832 } 00:05:40.832 ]' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.832 /dev/nbd1' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.832 /dev/nbd1' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.832 11:21:40 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:41.092 256+0 records in 00:05:41.092 256+0 records out 00:05:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00762366 s, 138 MB/s 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:41.092 256+0 records in 00:05:41.092 256+0 records out 00:05:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192887 s, 54.4 MB/s 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:41.092 256+0 records in 00:05:41.092 256+0 records out 00:05:41.092 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171154 s, 61.3 MB/s 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:41.092 11:21:40 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.092 11:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.352 11:21:40 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.352 11:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.614 11:21:40 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.614 11:21:40 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:42.185 11:21:41 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:42.757 [2024-11-05 11:21:41.838001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:42.757 [2024-11-05 11:21:41.912166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.757 [2024-11-05 11:21:41.912202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.757 [2024-11-05 11:21:42.015875] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:42.757 [2024-11-05 11:21:42.015928] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.303 spdk_app_start Round 1 00:05:45.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:45.303 11:21:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.303 11:21:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.303 11:21:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58394 /var/tmp/spdk-nbd.sock 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58394 ']' 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
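Round 0 above is one full pass of nbd_rpc_data_verify: two Malloc bdevs are created over the app_repeat RPC socket, exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is pushed through each with dd, read back and compared with cmp, and the exports are torn down again. A condensed sketch of that round trip for a single device, assuming the app is listening on /var/tmp/spdk-nbd.sock (as in the trace) and /dev/nbd0 is free; the scratch-file path is illustrative, not the test's own:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    tmp=/tmp/nbdrandtest   # illustrative; the test keeps its scratch file under spdk/test/event/

    $rpc bdev_malloc_create 64 4096         # 64 MB Malloc bdev with 4 KiB blocks, prints "Malloc0"
    $rpc nbd_start_disk Malloc0 /dev/nbd0   # expose the bdev as a local block device

    dd if=/dev/urandom of=$tmp bs=4096 count=256             # 1 MiB of random data
    dd if=$tmp of=/dev/nbd0 bs=4096 count=256 oflag=direct   # write it through the nbd export
    cmp -b -n 1M $tmp /dev/nbd0                              # verify what comes back from the bdev

    $rpc nbd_stop_disk /dev/nbd0
    rm -f $tmp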
00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:45.303 11:21:44 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:45.303 11:21:44 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.303 Malloc0 00:05:45.564 11:21:44 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.564 Malloc1 00:05:45.564 11:21:44 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.564 11:21:44 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.825 /dev/nbd0 00:05:45.825 11:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.086 1+0 records in 00:05:46.086 1+0 records out 
00:05:46.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434431 s, 9.4 MB/s 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.086 /dev/nbd1 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.086 1+0 records in 00:05:46.086 1+0 records out 00:05:46.086 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242447 s, 16.9 MB/s 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:46.086 11:21:45 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.086 11:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.347 { 00:05:46.347 "nbd_device": "/dev/nbd0", 00:05:46.347 "bdev_name": "Malloc0" 00:05:46.347 }, 00:05:46.347 { 00:05:46.347 "nbd_device": "/dev/nbd1", 00:05:46.347 "bdev_name": "Malloc1" 00:05:46.347 } 
00:05:46.347 ]' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.347 { 00:05:46.347 "nbd_device": "/dev/nbd0", 00:05:46.347 "bdev_name": "Malloc0" 00:05:46.347 }, 00:05:46.347 { 00:05:46.347 "nbd_device": "/dev/nbd1", 00:05:46.347 "bdev_name": "Malloc1" 00:05:46.347 } 00:05:46.347 ]' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.347 /dev/nbd1' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.347 /dev/nbd1' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.347 256+0 records in 00:05:46.347 256+0 records out 00:05:46.347 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755873 s, 139 MB/s 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.347 11:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.608 256+0 records in 00:05:46.608 256+0 records out 00:05:46.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202064 s, 51.9 MB/s 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.608 256+0 records in 00:05:46.608 256+0 records out 00:05:46.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245243 s, 42.8 MB/s 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.608 11:21:45 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.608 11:21:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.870 11:21:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.870 11:21:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.870 11:21:45 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.870 11:21:45 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.870 11:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.130 11:21:46 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.130 11:21:46 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.130 11:21:46 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.389 11:21:46 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:48.328 [2024-11-05 11:21:47.332579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:48.328 [2024-11-05 11:21:47.427096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:48.328 [2024-11-05 11:21:47.427119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.328 [2024-11-05 11:21:47.555813] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:48.328 [2024-11-05 11:21:47.555868] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.872 spdk_app_start Round 2 00:05:50.872 11:21:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.872 11:21:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.872 11:21:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58394 /var/tmp/spdk-nbd.sock 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58394 ']' 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:50.872 11:21:49 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:50.872 11:21:49 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.872 Malloc0 00:05:50.872 11:21:50 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:51.133 Malloc1 00:05:51.133 11:21:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.133 11:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.394 /dev/nbd0 00:05:51.394 11:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.394 11:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.394 1+0 records in 00:05:51.394 1+0 records out 
00:05:51.394 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159573 s, 25.7 MB/s 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.394 11:21:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.394 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.394 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.394 11:21:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.655 /dev/nbd1 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.655 1+0 records in 00:05:51.655 1+0 records out 00:05:51.655 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158087 s, 25.9 MB/s 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:05:51.655 11:21:50 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.655 11:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.915 11:21:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.915 { 00:05:51.915 "nbd_device": "/dev/nbd0", 00:05:51.915 "bdev_name": "Malloc0" 00:05:51.915 }, 00:05:51.915 { 00:05:51.915 "nbd_device": "/dev/nbd1", 00:05:51.915 "bdev_name": "Malloc1" 00:05:51.915 } 
00:05:51.915 ]' 00:05:51.915 11:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.915 { 00:05:51.915 "nbd_device": "/dev/nbd0", 00:05:51.915 "bdev_name": "Malloc0" 00:05:51.915 }, 00:05:51.915 { 00:05:51.916 "nbd_device": "/dev/nbd1", 00:05:51.916 "bdev_name": "Malloc1" 00:05:51.916 } 00:05:51.916 ]' 00:05:51.916 11:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.916 11:21:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.916 /dev/nbd1' 00:05:51.916 11:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.916 /dev/nbd1' 00:05:51.916 11:21:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.916 256+0 records in 00:05:51.916 256+0 records out 00:05:51.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00702314 s, 149 MB/s 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.916 256+0 records in 00:05:51.916 256+0 records out 00:05:51.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01544 s, 67.9 MB/s 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.916 256+0 records in 00:05:51.916 256+0 records out 00:05:51.916 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185816 s, 56.4 MB/s 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.916 11:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:52.177 11:21:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.437 11:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.699 11:21:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.699 11:21:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.973 11:21:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:53.541 [2024-11-05 11:21:52.559311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.541 [2024-11-05 11:21:52.640763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.541 [2024-11-05 11:21:52.640761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.541 [2024-11-05 11:21:52.747303] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:53.541 [2024-11-05 11:21:52.747366] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:56.081 11:21:55 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58394 /var/tmp/spdk-nbd.sock 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 58394 ']' 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:56.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
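After each round the trace confirms that no nbd exports were leaked: nbd_get_disks is queried once more, jq reduces the JSON to the device paths, and grep -c /dev/nbd counts them, which must be 0 before the next round (or the final kill) proceeds. A sketch of that check against the same socket; the error message and exit are a simplified stand-in for the test's '[' 0 -ne 0 ']' guard:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    disks_json=$($rpc nbd_get_disks)                          # '[]' once every disk has been stopped
    names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')   # newline-separated /dev/nbdX paths
    count=$(echo "$names" | grep -c /dev/nbd || true)         # grep -c prints 0 but exits 1 on no match

    [ "$count" -eq 0 ] || { echo "leaked nbd devices: $names"; exit 1; }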
00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:05:56.081 11:21:55 event.app_repeat -- event/event.sh@39 -- # killprocess 58394 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 58394 ']' 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 58394 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58394 00:05:56.081 killing process with pid 58394 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58394' 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@971 -- # kill 58394 00:05:56.081 11:21:55 event.app_repeat -- common/autotest_common.sh@976 -- # wait 58394 00:05:56.649 spdk_app_start is called in Round 0. 00:05:56.649 Shutdown signal received, stop current app iteration 00:05:56.649 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:05:56.649 spdk_app_start is called in Round 1. 00:05:56.649 Shutdown signal received, stop current app iteration 00:05:56.649 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:05:56.649 spdk_app_start is called in Round 2. 00:05:56.649 Shutdown signal received, stop current app iteration 00:05:56.650 Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 reinitialization... 00:05:56.650 spdk_app_start is called in Round 3. 00:05:56.650 Shutdown signal received, stop current app iteration 00:05:56.650 ************************************ 00:05:56.650 END TEST app_repeat 00:05:56.650 11:21:55 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:56.650 11:21:55 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:56.650 00:05:56.650 real 0m17.811s 00:05:56.650 user 0m38.938s 00:05:56.650 sys 0m2.109s 00:05:56.650 11:21:55 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:56.650 11:21:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:56.650 ************************************ 00:05:56.650 11:21:55 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:56.650 11:21:55 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.650 11:21:55 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.650 11:21:55 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.650 11:21:55 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.650 ************************************ 00:05:56.650 START TEST cpu_locks 00:05:56.650 ************************************ 00:05:56.650 11:21:55 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:56.650 * Looking for test storage... 
00:05:56.650 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.910 11:21:55 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:56.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.910 --rc genhtml_branch_coverage=1 00:05:56.910 --rc genhtml_function_coverage=1 00:05:56.910 --rc genhtml_legend=1 00:05:56.910 --rc geninfo_all_blocks=1 00:05:56.910 --rc geninfo_unexecuted_blocks=1 00:05:56.910 00:05:56.910 ' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:56.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.910 --rc genhtml_branch_coverage=1 00:05:56.910 --rc genhtml_function_coverage=1 
00:05:56.910 --rc genhtml_legend=1 00:05:56.910 --rc geninfo_all_blocks=1 00:05:56.910 --rc geninfo_unexecuted_blocks=1 00:05:56.910 00:05:56.910 ' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:56.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.910 --rc genhtml_branch_coverage=1 00:05:56.910 --rc genhtml_function_coverage=1 00:05:56.910 --rc genhtml_legend=1 00:05:56.910 --rc geninfo_all_blocks=1 00:05:56.910 --rc geninfo_unexecuted_blocks=1 00:05:56.910 00:05:56.910 ' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:56.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.910 --rc genhtml_branch_coverage=1 00:05:56.910 --rc genhtml_function_coverage=1 00:05:56.910 --rc genhtml_legend=1 00:05:56.910 --rc geninfo_all_blocks=1 00:05:56.910 --rc geninfo_unexecuted_blocks=1 00:05:56.910 00:05:56.910 ' 00:05:56.910 11:21:55 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.910 11:21:55 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.910 11:21:55 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.910 11:21:55 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:56.910 11:21:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.910 ************************************ 00:05:56.910 START TEST default_locks 00:05:56.910 ************************************ 00:05:56.910 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58825 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58825 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58825 ']' 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:56.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:56.911 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.911 [2024-11-05 11:21:56.072572] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:05:56.911 [2024-11-05 11:21:56.072669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58825 ] 00:05:57.172 [2024-11-05 11:21:56.224011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.172 [2024-11-05 11:21:56.305706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.742 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.742 11:21:56 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:05:57.742 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58825 00:05:57.742 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58825 00:05:57.742 11:21:56 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58825 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 58825 ']' 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 58825 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58825 00:05:58.002 killing process with pid 58825 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58825' 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 58825 00:05:58.002 11:21:57 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 58825 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58825 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58825 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:59.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58825 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 58825 ']' 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.386 ERROR: process (pid: 58825) is no longer running 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.386 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (58825) - No such process 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:59.386 00:05:59.386 real 0m2.290s 00:05:59.386 user 0m2.280s 00:05:59.386 sys 0m0.421s 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.386 ************************************ 00:05:59.386 END TEST default_locks 00:05:59.386 ************************************ 00:05:59.386 11:21:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.386 11:21:58 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:59.386 11:21:58 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.386 11:21:58 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.386 11:21:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.386 ************************************ 00:05:59.386 START TEST default_locks_via_rpc 00:05:59.386 ************************************ 00:05:59.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
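The default_locks case ends with `NOT waitforlisten 58825`: the target was already killed, so waiting on it must fail, and the wrapper inverts that failure into a pass (the trace shows `es=1`, the `(( !es == 0 ))` check, and the expected "No such process" message). A condensed sketch of that wrapper; the real helper in test/common/autotest_common.sh also validates its argument and filters exit codes, which is omitted here:

```bash
# Expected-failure wrapper: succeed only when the wrapped command fails.
NOT() {
    local es=0
    "$@" || es=$?      # run the command, capture its exit status
    (( !es == 0 ))     # exit 0 iff the command returned non-zero
}

# Usage as in the log, after the pid-58825 target has been killed:
# NOT waitforlisten 58825
```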
00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58878 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58878 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 58878 ']' 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.386 11:21:58 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:59.386 [2024-11-05 11:21:58.418406] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:05:59.386 [2024-11-05 11:21:58.418527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58878 ] 00:05:59.386 [2024-11-05 11:21:58.577258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.647 [2024-11-05 11:21:58.680146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 
58878 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58878 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58878 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 58878 ']' 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 58878 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:00.217 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58878 00:06:00.492 killing process with pid 58878 00:06:00.492 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:00.492 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:00.492 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58878' 00:06:00.492 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 58878 00:06:00.492 11:21:59 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 58878 00:06:01.932 ************************************ 00:06:01.932 END TEST default_locks_via_rpc 00:06:01.932 ************************************ 00:06:01.932 00:06:01.932 real 0m2.800s 00:06:01.932 user 0m2.791s 00:06:01.932 sys 0m0.445s 00:06:01.932 11:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:01.932 11:22:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:01.932 11:22:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:01.932 11:22:01 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:01.932 11:22:01 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:01.932 11:22:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.932 ************************************ 00:06:01.932 START TEST non_locking_app_on_locked_coremask 00:06:01.932 ************************************ 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58941 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58941 /var/tmp/spdk.sock 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58941 ']' 00:06:02.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
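Throughout the run, the `locks_exist` check is just `lslocks` filtered for the per-core lock files the target holds under /var/tmp. A minimal sketch of that helper, with a pid from the trace as the example (the function body mirrors the two commands in the log):

```bash
# A running spdk_tgt holds flock()s on /var/tmp/spdk_cpu_lock_* files;
# lslocks lists the locks owned by a pid, so grepping for the prefix tells
# us whether the target still owns its cores.
locks_exist() {
    local pid=$1
    lslocks -p "$pid" | grep -q spdk_cpu_lock
}

# locks_exist 58878   # true while the -m 0x1 target holds core 0's lock
```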
00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:02.193 11:22:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.193 [2024-11-05 11:22:01.297412] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:02.193 [2024-11-05 11:22:01.297558] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58941 ] 00:06:02.193 [2024-11-05 11:22:01.461695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.455 [2024-11-05 11:22:01.590684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58957 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58957 /var/tmp/spdk2.sock 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 58957 ']' 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:03.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:03.027 11:22:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.288 [2024-11-05 11:22:02.335855] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:03.288 [2024-11-05 11:22:02.335961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58957 ] 00:06:03.288 [2024-11-05 11:22:02.506455] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:03.288 [2024-11-05 11:22:02.506498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.546 [2024-11-05 11:22:02.704107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.919 11:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:04.919 11:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:04.919 11:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58941 00:06:04.919 11:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:04.919 11:22:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58941 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58941 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58941 ']' 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58941 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58941 00:06:04.919 killing process with pid 58941 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58941' 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58941 00:06:04.919 11:22:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58941 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58957 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 58957 ']' 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 58957 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58957 00:06:07.454 killing process with pid 58957 00:06:07.454 11:22:06 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58957' 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 58957 00:06:07.454 11:22:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 58957 00:06:08.834 00:06:08.834 real 0m6.532s 00:06:08.834 user 0m6.699s 00:06:08.834 sys 0m0.938s 00:06:08.834 ************************************ 00:06:08.834 END TEST non_locking_app_on_locked_coremask 00:06:08.834 ************************************ 00:06:08.834 11:22:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:08.834 11:22:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.834 11:22:07 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:08.834 11:22:07 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:08.834 11:22:07 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:08.834 11:22:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:08.834 ************************************ 00:06:08.834 START TEST locking_app_on_unlocked_coremask 00:06:08.834 ************************************ 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59058 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59058 /var/tmp/spdk.sock 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59058 ']' 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.834 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:08.834 11:22:07 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:08.834 [2024-11-05 11:22:07.869835] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:08.834 [2024-11-05 11:22:07.869959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59058 ] 00:06:08.834 [2024-11-05 11:22:08.019185] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:08.834 [2024-11-05 11:22:08.019223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.834 [2024-11-05 11:22:08.099896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59064 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59064 /var/tmp/spdk2.sock 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59064 ']' 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.439 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:09.440 11:22:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:09.697 [2024-11-05 11:22:08.749000] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
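This test runs two targets on the same core: the first is started with --disable-cpumask-locks so it never takes the core-0 lock, and the second is a plain -m 0x1 instance on its own RPC socket, which then claims the lock. A condensed sketch of that setup; the binary path, masks, and socket names come from the trace, while the readiness poll is an illustrative stand-in for waitforlisten:

```bash
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# First instance opts out of CPU core locking, so core 0 stays unclaimed.
"$SPDK_BIN" -m 0x1 --disable-cpumask-locks &
pid1=$!

# Second instance shares core 0 but keeps the default locking behaviour and
# uses its own socket, so it is the one holding /var/tmp/spdk_cpu_lock_000.
"$SPDK_BIN" -m 0x1 -r /var/tmp/spdk2.sock &
pid2=$!

# Crude readiness check: poll each RPC socket until it answers.
for sock in /var/tmp/spdk.sock /var/tmp/spdk2.sock; do
    until "$RPC" -s "$sock" rpc_get_methods &>/dev/null; do sleep 0.1; done
done
```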
00:06:09.697 [2024-11-05 11:22:08.749141] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59064 ] 00:06:09.697 [2024-11-05 11:22:08.919180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.954 [2024-11-05 11:22:09.088273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.886 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:10.886 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:10.886 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59064 00:06:10.886 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59064 00:06:10.886 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59058 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59058 ']' 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59058 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59058 00:06:11.148 killing process with pid 59058 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59058' 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59058 00:06:11.148 11:22:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59058 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59064 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59064 ']' 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 59064 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59064 00:06:13.687 killing process with pid 59064 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:13.687 11:22:12 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59064' 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 59064 00:06:13.687 11:22:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 59064 00:06:15.072 00:06:15.072 real 0m6.264s 00:06:15.072 user 0m6.507s 00:06:15.072 sys 0m0.809s 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.072 ************************************ 00:06:15.072 END TEST locking_app_on_unlocked_coremask 00:06:15.072 ************************************ 00:06:15.072 11:22:14 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:15.072 11:22:14 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:15.072 11:22:14 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:15.072 11:22:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.072 ************************************ 00:06:15.072 START TEST locking_app_on_locked_coremask 00:06:15.072 ************************************ 00:06:15.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59166 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59166 /var/tmp/spdk.sock 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59166 ']' 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.072 11:22:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.072 [2024-11-05 11:22:14.166322] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:15.072 [2024-11-05 11:22:14.166433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59166 ] 00:06:15.072 [2024-11-05 11:22:14.323767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.334 [2024-11-05 11:22:14.435305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59182 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59182 /var/tmp/spdk2.sock 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59182 /var/tmp/spdk2.sock 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:15.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59182 /var/tmp/spdk2.sock 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 59182 ']' 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:15.927 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:15.927 [2024-11-05 11:22:15.119689] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:15.927 [2024-11-05 11:22:15.119851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59182 ] 00:06:16.185 [2024-11-05 11:22:15.303305] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59166 has claimed it. 00:06:16.185 [2024-11-05 11:22:15.303369] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:16.755 ERROR: process (pid: 59182) is no longer running 00:06:16.755 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59182) - No such process 00:06:16.755 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:16.755 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:16.755 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 59166 ']' 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59166 00:06:16.756 killing process with pid 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59166' 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 59166 00:06:16.756 11:22:15 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 59166 00:06:18.673 00:06:18.673 real 0m3.415s 00:06:18.673 user 0m3.652s 00:06:18.673 sys 0m0.536s 00:06:18.673 11:22:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:18.673 ************************************ 00:06:18.673 END 
TEST locking_app_on_locked_coremask 00:06:18.673 ************************************ 00:06:18.673 11:22:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 11:22:17 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:18.673 11:22:17 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:18.673 11:22:17 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:18.673 11:22:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 ************************************ 00:06:18.673 START TEST locking_overlapped_coremask 00:06:18.673 ************************************ 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59235 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59235 /var/tmp/spdk.sock 00:06:18.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59235 ']' 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.673 11:22:17 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:18.673 [2024-11-05 11:22:17.625603] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:18.673 [2024-11-05 11:22:17.625881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59235 ] 00:06:18.673 [2024-11-05 11:22:17.785691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:18.673 [2024-11-05 11:22:17.886504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.673 [2024-11-05 11:22:17.886947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.673 [2024-11-05 11:22:17.886969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59253 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59253 /var/tmp/spdk2.sock 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59253 /var/tmp/spdk2.sock 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59253 /var/tmp/spdk2.sock 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 59253 ']' 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:19.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:19.243 11:22:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:19.501 [2024-11-05 11:22:18.537603] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:19.501 [2024-11-05 11:22:18.537898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59253 ] 00:06:19.501 [2024-11-05 11:22:18.713314] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59235 has claimed it. 00:06:19.501 [2024-11-05 11:22:18.713378] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:20.070 ERROR: process (pid: 59253) is no longer running 00:06:20.070 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59253) - No such process 00:06:20.070 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59235 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 59235 ']' 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 59235 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59235 00:06:20.071 killing process with pid 59235 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59235' 00:06:20.071 11:22:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 59235 00:06:20.071 11:22:19 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 59235 00:06:21.981 ************************************ 00:06:21.981 END TEST locking_overlapped_coremask 00:06:21.981 ************************************ 00:06:21.981 00:06:21.981 real 0m3.241s 00:06:21.981 user 0m8.757s 00:06:21.981 sys 0m0.419s 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.981 11:22:20 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:21.981 11:22:20 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:21.981 11:22:20 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:21.981 11:22:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.981 ************************************ 00:06:21.981 START TEST locking_overlapped_coremask_via_rpc 00:06:21.981 ************************************ 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:21.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59306 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59306 /var/tmp/spdk.sock 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59306 ']' 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.981 11:22:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:21.981 [2024-11-05 11:22:20.930490] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:21.981 [2024-11-05 11:22:20.930635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59306 ] 00:06:21.981 [2024-11-05 11:22:21.090236] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
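At the end of the overlapped-coremask test the trace globs the remaining lock files and compares them, as a single string, against the set the surviving -m 0x7 target should still hold (cores 0, 1 and 2). The same check, condensed into the helper the trace is executing; array names follow the log:

```bash
# After the overlapping -m 0x1c instance was refused, only the -m 0x7 target's
# locks for cores 0, 1 and 2 should remain in /var/tmp.
check_remaining_locks() {
    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]]
}
```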
00:06:21.981 [2024-11-05 11:22:21.090286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.981 [2024-11-05 11:22:21.196891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.981 [2024-11-05 11:22:21.197062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.981 [2024-11-05 11:22:21.197167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59329 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59329 /var/tmp/spdk2.sock 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59329 ']' 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.922 11:22:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.922 [2024-11-05 11:22:21.975153] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:22.922 [2024-11-05 11:22:21.975724] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59329 ] 00:06:22.922 [2024-11-05 11:22:22.149173] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:22.922 [2024-11-05 11:22:22.149225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:23.183 [2024-11-05 11:22:22.356513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:23.183 [2024-11-05 11:22:22.359851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:23.183 [2024-11-05 11:22:22.359856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.567 [2024-11-05 11:22:23.520937] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59306 has claimed it. 00:06:24.567 request: 00:06:24.567 { 00:06:24.567 "method": "framework_enable_cpumask_locks", 00:06:24.567 "req_id": 1 00:06:24.567 } 00:06:24.567 Got JSON-RPC error response 00:06:24.567 response: 00:06:24.567 { 00:06:24.567 "code": -32603, 00:06:24.567 "message": "Failed to claim CPU core: 2" 00:06:24.567 } 00:06:24.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
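The request/response pair above is the via_rpc flow in full: both targets start with --disable-cpumask-locks, the first (socket /var/tmp/spdk.sock, cores 0-2) takes the locks when framework_enable_cpumask_locks is called, and the same call against the second target (socket /var/tmp/spdk2.sock, cores 2-4) fails with JSON-RPC error -32603 because core 2 is already claimed. A hedged sketch of driving the same two calls by hand with scripts/rpc.py, using only the sockets and method name shown above:

    # Sketch: enable core locks on the first target, then expect the second to fail.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    "$RPC" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # succeeds, creates spdk_cpu_lock_000-002
    if ! "$RPC" -s /var/tmp/spdk2.sock framework_enable_cpumask_locks; then
        echo "second target could not claim overlapping core 2 (expected)"
    fi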
00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:24.567 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59306 /var/tmp/spdk.sock 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59306 ']' 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59329 /var/tmp/spdk2.sock 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59329 ']' 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:24.568 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.829 ************************************ 00:06:24.829 END TEST locking_overlapped_coremask_via_rpc 00:06:24.829 ************************************ 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:24.829 00:06:24.829 real 0m3.100s 00:06:24.829 user 0m1.194s 00:06:24.829 sys 0m0.150s 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:24.829 11:22:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.829 11:22:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:24.829 11:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59306 ]] 00:06:24.829 11:22:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59306 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59306 ']' 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59306 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59306 00:06:24.829 killing process with pid 59306 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59306' 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59306 00:06:24.829 11:22:23 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59306 00:06:26.742 11:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59329 ]] 00:06:26.742 11:22:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59329 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59329 ']' 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59329 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:26.742 
11:22:25 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59329 00:06:26.742 killing process with pid 59329 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59329' 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 59329 00:06:26.742 11:22:25 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 59329 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:27.685 Process with pid 59306 is not found 00:06:27.685 Process with pid 59329 is not found 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59306 ]] 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59306 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59306 ']' 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59306 00:06:27.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59306) - No such process 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59306 is not found' 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59329 ]] 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59329 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 59329 ']' 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 59329 00:06:27.685 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (59329) - No such process 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 59329 is not found' 00:06:27.685 11:22:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:27.685 00:06:27.685 real 0m31.034s 00:06:27.685 user 0m54.590s 00:06:27.685 sys 0m4.560s 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.685 ************************************ 00:06:27.685 END TEST cpu_locks 00:06:27.685 ************************************ 00:06:27.685 11:22:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 ************************************ 00:06:27.685 END TEST event 00:06:27.685 ************************************ 00:06:27.685 00:06:27.685 real 0m56.246s 00:06:27.685 user 1m44.763s 00:06:27.685 sys 0m7.459s 00:06:27.685 11:22:26 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:27.685 11:22:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:27.685 11:22:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:27.685 11:22:26 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:27.685 11:22:26 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.685 11:22:26 -- common/autotest_common.sh@10 -- # set +x 00:06:27.946 ************************************ 00:06:27.946 START TEST thread 00:06:27.946 ************************************ 00:06:27.946 11:22:26 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:27.946 * Looking for test storage... 
00:06:27.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:27.946 11:22:27 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.946 11:22:27 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.946 11:22:27 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.946 11:22:27 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.946 11:22:27 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.946 11:22:27 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.946 11:22:27 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.946 11:22:27 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.946 11:22:27 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.946 11:22:27 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.946 11:22:27 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.946 11:22:27 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:27.946 11:22:27 thread -- scripts/common.sh@345 -- # : 1 00:06:27.946 11:22:27 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.946 11:22:27 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:27.946 11:22:27 thread -- scripts/common.sh@365 -- # decimal 1 00:06:27.946 11:22:27 thread -- scripts/common.sh@353 -- # local d=1 00:06:27.946 11:22:27 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.946 11:22:27 thread -- scripts/common.sh@355 -- # echo 1 00:06:27.946 11:22:27 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.946 11:22:27 thread -- scripts/common.sh@366 -- # decimal 2 00:06:27.946 11:22:27 thread -- scripts/common.sh@353 -- # local d=2 00:06:27.946 11:22:27 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.946 11:22:27 thread -- scripts/common.sh@355 -- # echo 2 00:06:27.946 11:22:27 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.946 11:22:27 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.946 11:22:27 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.946 11:22:27 thread -- scripts/common.sh@368 -- # return 0 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.946 --rc genhtml_branch_coverage=1 00:06:27.946 --rc genhtml_function_coverage=1 00:06:27.946 --rc genhtml_legend=1 00:06:27.946 --rc geninfo_all_blocks=1 00:06:27.946 --rc geninfo_unexecuted_blocks=1 00:06:27.946 00:06:27.946 ' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.946 --rc genhtml_branch_coverage=1 00:06:27.946 --rc genhtml_function_coverage=1 00:06:27.946 --rc genhtml_legend=1 00:06:27.946 --rc geninfo_all_blocks=1 00:06:27.946 --rc geninfo_unexecuted_blocks=1 00:06:27.946 00:06:27.946 ' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:27.946 --rc genhtml_branch_coverage=1 00:06:27.946 --rc genhtml_function_coverage=1 00:06:27.946 --rc genhtml_legend=1 00:06:27.946 --rc geninfo_all_blocks=1 00:06:27.946 --rc geninfo_unexecuted_blocks=1 00:06:27.946 00:06:27.946 ' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:27.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.946 --rc genhtml_branch_coverage=1 00:06:27.946 --rc genhtml_function_coverage=1 00:06:27.946 --rc genhtml_legend=1 00:06:27.946 --rc geninfo_all_blocks=1 00:06:27.946 --rc geninfo_unexecuted_blocks=1 00:06:27.946 00:06:27.946 ' 00:06:27.946 11:22:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:27.946 11:22:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:27.946 ************************************ 00:06:27.946 START TEST thread_poller_perf 00:06:27.946 ************************************ 00:06:27.946 11:22:27 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:27.946 [2024-11-05 11:22:27.133540] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:27.946 [2024-11-05 11:22:27.133754] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59490 ] 00:06:28.207 [2024-11-05 11:22:27.292850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.207 [2024-11-05 11:22:27.392816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.207 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:29.594 [2024-11-05T11:22:28.868Z] ====================================== 00:06:29.594 [2024-11-05T11:22:28.868Z] busy:2612170734 (cyc) 00:06:29.594 [2024-11-05T11:22:28.868Z] total_run_count: 307000 00:06:29.594 [2024-11-05T11:22:28.868Z] tsc_hz: 2600000000 (cyc) 00:06:29.594 [2024-11-05T11:22:28.868Z] ====================================== 00:06:29.594 [2024-11-05T11:22:28.868Z] poller_cost: 8508 (cyc), 3272 (nsec) 00:06:29.594 00:06:29.594 ************************************ 00:06:29.594 END TEST thread_poller_perf 00:06:29.594 ************************************ 00:06:29.594 real 0m1.446s 00:06:29.594 user 0m1.272s 00:06:29.594 sys 0m0.067s 00:06:29.594 11:22:28 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:29.594 11:22:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:29.594 11:22:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.594 11:22:28 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:06:29.594 11:22:28 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:29.594 11:22:28 thread -- common/autotest_common.sh@10 -- # set +x 00:06:29.594 ************************************ 00:06:29.594 START TEST thread_poller_perf 00:06:29.594 ************************************ 00:06:29.594 11:22:28 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:29.594 [2024-11-05 11:22:28.621196] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:29.594 [2024-11-05 11:22:28.621428] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59526 ] 00:06:29.594 [2024-11-05 11:22:28.782228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.855 Running 1000 pollers for 1 seconds with 0 microseconds period. 
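The poller_cost line above is simple arithmetic over the counters printed with it: busy cycles divided by total_run_count gives cycles per poller invocation, and dividing by tsc_hz converts that to nanoseconds. A rough recomputation of the 1-microsecond-period run, in integer shell arithmetic:

    # Sketch: recompute poller_cost from the counters reported above.
    busy=2612170734        # busy: cycles spent in the run
    runs=307000            # total_run_count
    tsc_hz=2600000000      # tsc_hz reported by the tool
    cyc=$(( busy / runs ))                         # ~8508 cyc per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))          # ~3272 nsec per poll
    echo "poller_cost: ${cyc} (cyc), ${nsec} (nsec)"

The zero-period run that follows drives total_run_count much higher, which is why its per-poll cost drops to a few hundred cycles.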
00:06:29.855 [2024-11-05 11:22:28.882936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.796 [2024-11-05T11:22:30.070Z] ====================================== 00:06:30.796 [2024-11-05T11:22:30.070Z] busy:2603559128 (cyc) 00:06:30.796 [2024-11-05T11:22:30.070Z] total_run_count: 3899000 00:06:30.796 [2024-11-05T11:22:30.070Z] tsc_hz: 2600000000 (cyc) 00:06:30.796 [2024-11-05T11:22:30.070Z] ====================================== 00:06:30.796 [2024-11-05T11:22:30.071Z] poller_cost: 667 (cyc), 256 (nsec) 00:06:30.797 00:06:30.797 real 0m1.445s 00:06:30.797 user 0m1.268s 00:06:30.797 sys 0m0.069s 00:06:30.797 11:22:30 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.797 ************************************ 00:06:30.797 END TEST thread_poller_perf 00:06:30.797 ************************************ 00:06:30.797 11:22:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:30.797 11:22:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:30.797 ************************************ 00:06:30.797 END TEST thread 00:06:30.797 ************************************ 00:06:30.797 00:06:30.797 real 0m3.105s 00:06:30.797 user 0m2.657s 00:06:30.797 sys 0m0.232s 00:06:30.797 11:22:30 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.797 11:22:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.058 11:22:30 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:31.058 11:22:30 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:31.058 11:22:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:31.058 11:22:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:31.058 11:22:30 -- common/autotest_common.sh@10 -- # set +x 00:06:31.058 ************************************ 00:06:31.058 START TEST app_cmdline 00:06:31.058 ************************************ 00:06:31.058 11:22:30 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:31.058 * Looking for test storage... 
00:06:31.058 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:31.058 11:22:30 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:31.058 11:22:30 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:06:31.058 11:22:30 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:31.058 11:22:30 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:31.058 11:22:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:31.059 11:22:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:31.059 11:22:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:31.059 11:22:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:31.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.059 --rc genhtml_branch_coverage=1 00:06:31.059 --rc genhtml_function_coverage=1 00:06:31.059 --rc genhtml_legend=1 00:06:31.059 --rc geninfo_all_blocks=1 00:06:31.059 --rc geninfo_unexecuted_blocks=1 00:06:31.059 00:06:31.059 ' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:31.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.059 --rc genhtml_branch_coverage=1 00:06:31.059 --rc genhtml_function_coverage=1 00:06:31.059 --rc genhtml_legend=1 00:06:31.059 --rc geninfo_all_blocks=1 00:06:31.059 --rc geninfo_unexecuted_blocks=1 00:06:31.059 
00:06:31.059 ' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:31.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.059 --rc genhtml_branch_coverage=1 00:06:31.059 --rc genhtml_function_coverage=1 00:06:31.059 --rc genhtml_legend=1 00:06:31.059 --rc geninfo_all_blocks=1 00:06:31.059 --rc geninfo_unexecuted_blocks=1 00:06:31.059 00:06:31.059 ' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:31.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:31.059 --rc genhtml_branch_coverage=1 00:06:31.059 --rc genhtml_function_coverage=1 00:06:31.059 --rc genhtml_legend=1 00:06:31.059 --rc geninfo_all_blocks=1 00:06:31.059 --rc geninfo_unexecuted_blocks=1 00:06:31.059 00:06:31.059 ' 00:06:31.059 11:22:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:31.059 11:22:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59610 00:06:31.059 11:22:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:31.059 11:22:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59610 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 59610 ']' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.059 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.059 11:22:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:31.059 [2024-11-05 11:22:30.304721] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:31.059 [2024-11-05 11:22:30.305039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59610 ] 00:06:31.317 [2024-11-05 11:22:30.466444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.317 [2024-11-05 11:22:30.568376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.926 11:22:31 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.926 11:22:31 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:06:31.926 11:22:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:32.185 { 00:06:32.185 "version": "SPDK v25.01-pre git sha1 1aeff8917", 00:06:32.185 "fields": { 00:06:32.185 "major": 25, 00:06:32.185 "minor": 1, 00:06:32.185 "patch": 0, 00:06:32.185 "suffix": "-pre", 00:06:32.185 "commit": "1aeff8917" 00:06:32.185 } 00:06:32.185 } 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:32.185 11:22:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:32.185 11:22:31 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:32.443 request: 00:06:32.443 { 00:06:32.443 "method": "env_dpdk_get_mem_stats", 00:06:32.443 "req_id": 1 00:06:32.443 } 00:06:32.443 Got JSON-RPC error response 00:06:32.443 response: 00:06:32.443 { 00:06:32.443 "code": -32601, 00:06:32.443 "message": "Method not found" 00:06:32.443 } 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.443 11:22:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59610 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 59610 ']' 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 59610 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59610 00:06:32.443 killing process with pid 59610 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59610' 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@971 -- # kill 59610 00:06:32.443 11:22:31 app_cmdline -- common/autotest_common.sh@976 -- # wait 59610 00:06:33.816 ************************************ 00:06:33.816 END TEST app_cmdline 00:06:33.816 ************************************ 00:06:33.816 00:06:33.816 real 0m2.960s 00:06:33.816 user 0m3.273s 00:06:33.816 sys 0m0.410s 00:06:33.816 11:22:33 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:33.816 11:22:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:33.816 11:22:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:33.816 11:22:33 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:33.816 11:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:33.816 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:06:34.073 ************************************ 00:06:34.073 START TEST version 00:06:34.073 ************************************ 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:34.073 * Looking for test storage... 
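The app_cmdline run above started its target with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are servable: spdk_get_version returned the version object shown, and the non-whitelisted env_dpdk_get_mem_stats call was rejected with JSON-RPC -32601 "Method not found". A sketch of exercising the same whitelist by hand, reusing only the socket, script, and method names that appear in the trace:

    # Sketch: the cmdline target whitelists exactly two RPC methods.
    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    SOCK=/var/tmp/spdk.sock
    "$RPC" -s "$SOCK" spdk_get_version          # allowed: returns the version object
    "$RPC" -s "$SOCK" rpc_get_methods           # allowed: lists the two permitted methods
    "$RPC" -s "$SOCK" env_dpdk_get_mem_stats \
        || echo "rejected with -32601 Method not found (expected)"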
00:06:34.073 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.073 11:22:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.073 11:22:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.073 11:22:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.073 11:22:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.073 11:22:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.073 11:22:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.073 11:22:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.073 11:22:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.073 11:22:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.073 11:22:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.073 11:22:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.073 11:22:33 version -- scripts/common.sh@344 -- # case "$op" in 00:06:34.073 11:22:33 version -- scripts/common.sh@345 -- # : 1 00:06:34.073 11:22:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.073 11:22:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:34.073 11:22:33 version -- scripts/common.sh@365 -- # decimal 1 00:06:34.073 11:22:33 version -- scripts/common.sh@353 -- # local d=1 00:06:34.073 11:22:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.073 11:22:33 version -- scripts/common.sh@355 -- # echo 1 00:06:34.073 11:22:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.073 11:22:33 version -- scripts/common.sh@366 -- # decimal 2 00:06:34.073 11:22:33 version -- scripts/common.sh@353 -- # local d=2 00:06:34.073 11:22:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.073 11:22:33 version -- scripts/common.sh@355 -- # echo 2 00:06:34.073 11:22:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.073 11:22:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.073 11:22:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.073 11:22:33 version -- scripts/common.sh@368 -- # return 0 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.073 --rc genhtml_branch_coverage=1 00:06:34.073 --rc genhtml_function_coverage=1 00:06:34.073 --rc genhtml_legend=1 00:06:34.073 --rc geninfo_all_blocks=1 00:06:34.073 --rc geninfo_unexecuted_blocks=1 00:06:34.073 00:06:34.073 ' 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.073 --rc genhtml_branch_coverage=1 00:06:34.073 --rc genhtml_function_coverage=1 00:06:34.073 --rc genhtml_legend=1 00:06:34.073 --rc geninfo_all_blocks=1 00:06:34.073 --rc geninfo_unexecuted_blocks=1 00:06:34.073 00:06:34.073 ' 00:06:34.073 11:22:33 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.073 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:34.073 --rc genhtml_branch_coverage=1 00:06:34.073 --rc genhtml_function_coverage=1 00:06:34.074 --rc genhtml_legend=1 00:06:34.074 --rc geninfo_all_blocks=1 00:06:34.074 --rc geninfo_unexecuted_blocks=1 00:06:34.074 00:06:34.074 ' 00:06:34.074 11:22:33 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.074 --rc genhtml_branch_coverage=1 00:06:34.074 --rc genhtml_function_coverage=1 00:06:34.074 --rc genhtml_legend=1 00:06:34.074 --rc geninfo_all_blocks=1 00:06:34.074 --rc geninfo_unexecuted_blocks=1 00:06:34.074 00:06:34.074 ' 00:06:34.074 11:22:33 version -- app/version.sh@17 -- # get_header_version major 00:06:34.074 11:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # cut -f2 00:06:34.074 11:22:33 version -- app/version.sh@17 -- # major=25 00:06:34.074 11:22:33 version -- app/version.sh@18 -- # get_header_version minor 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # cut -f2 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.074 11:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:34.074 11:22:33 version -- app/version.sh@18 -- # minor=1 00:06:34.074 11:22:33 version -- app/version.sh@19 -- # get_header_version patch 00:06:34.074 11:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # cut -f2 00:06:34.074 11:22:33 version -- app/version.sh@19 -- # patch=0 00:06:34.074 11:22:33 version -- app/version.sh@20 -- # get_header_version suffix 00:06:34.074 11:22:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # tr -d '"' 00:06:34.074 11:22:33 version -- app/version.sh@14 -- # cut -f2 00:06:34.074 11:22:33 version -- app/version.sh@20 -- # suffix=-pre 00:06:34.074 11:22:33 version -- app/version.sh@22 -- # version=25.1 00:06:34.074 11:22:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:34.074 11:22:33 version -- app/version.sh@28 -- # version=25.1rc0 00:06:34.074 11:22:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:34.074 11:22:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:34.074 11:22:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:34.074 11:22:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:34.074 00:06:34.074 real 0m0.175s 00:06:34.074 user 0m0.114s 00:06:34.074 sys 0m0.082s 00:06:34.074 ************************************ 00:06:34.074 END TEST version 00:06:34.074 ************************************ 00:06:34.074 11:22:33 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:34.074 11:22:33 version -- common/autotest_common.sh@10 -- # set +x 00:06:34.074 11:22:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:34.074 11:22:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:34.074 11:22:33 -- spdk/autotest.sh@194 -- # uname -s 00:06:34.074 11:22:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:34.074 11:22:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:34.074 11:22:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:34.074 11:22:33 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:34.074 11:22:33 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:34.074 11:22:33 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:34.074 11:22:33 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:34.074 11:22:33 -- common/autotest_common.sh@10 -- # set +x 00:06:34.074 ************************************ 00:06:34.074 START TEST blockdev_nvme 00:06:34.074 ************************************ 00:06:34.074 11:22:33 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:34.331 * Looking for test storage... 00:06:34.331 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.331 11:22:33 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.331 --rc genhtml_branch_coverage=1 00:06:34.331 --rc genhtml_function_coverage=1 00:06:34.331 --rc genhtml_legend=1 00:06:34.331 --rc geninfo_all_blocks=1 00:06:34.331 --rc geninfo_unexecuted_blocks=1 00:06:34.331 00:06:34.331 ' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.331 --rc genhtml_branch_coverage=1 00:06:34.331 --rc genhtml_function_coverage=1 00:06:34.331 --rc genhtml_legend=1 00:06:34.331 --rc geninfo_all_blocks=1 00:06:34.331 --rc geninfo_unexecuted_blocks=1 00:06:34.331 00:06:34.331 ' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.331 --rc genhtml_branch_coverage=1 00:06:34.331 --rc genhtml_function_coverage=1 00:06:34.331 --rc genhtml_legend=1 00:06:34.331 --rc geninfo_all_blocks=1 00:06:34.331 --rc geninfo_unexecuted_blocks=1 00:06:34.331 00:06:34.331 ' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:34.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.331 --rc genhtml_branch_coverage=1 00:06:34.331 --rc genhtml_function_coverage=1 00:06:34.331 --rc genhtml_legend=1 00:06:34.331 --rc geninfo_all_blocks=1 00:06:34.331 --rc geninfo_unexecuted_blocks=1 00:06:34.331 00:06:34.331 ' 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.331 11:22:33 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=59782 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 59782 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 59782 ']' 00:06:34.331 11:22:33 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.331 11:22:33 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:34.332 11:22:33 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.332 11:22:33 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:34.332 11:22:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:34.332 [2024-11-05 11:22:33.503868] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
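Once this target is up, setup_nvme_conf feeds it the JSON emitted by scripts/gen_nvme.sh: one bdev_nvme_attach_controller entry per PCIe controller, loaded through the rpc_cmd load_subsystem_config -j call visible just below. A trimmed, pretty-printed sketch of that same payload, kept to the first two controllers for brevity (rpc_cmd is the test helper from autotest_common.sh):

    # Sketch: the bdev subsystem config loaded below, shortened to two controllers.
    rpc_cmd load_subsystem_config -j '{
      "subsystem": "bdev",
      "config": [
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" } },
        { "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme1", "traddr": "0000:00:11.0" } }
      ]
    }'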
00:06:34.332 [2024-11-05 11:22:33.503966] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59782 ] 00:06:34.588 [2024-11-05 11:22:33.654504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.588 [2024-11-05 11:22:33.740120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.152 11:22:34 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:35.152 11:22:34 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.152 11:22:34 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:35.152 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.152 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.409 11:22:34 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:35.409 11:22:34 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:35.409 11:22:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:35.666 11:22:34 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:35.666 11:22:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "68ad26c5-70f2-45b2-9cb4-22bca0b36f89"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "68ad26c5-70f2-45b2-9cb4-22bca0b36f89",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "76c53d4f-bdde-4e55-bbd9-c2134b67dca7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "76c53d4f-bdde-4e55-bbd9-c2134b67dca7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "aaeda86c-cfe5-4235-900d-ecfd3c8d4400"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "aaeda86c-cfe5-4235-900d-ecfd3c8d4400",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "f144879c-3c23-4e46-8c7b-6371f6ea72b2"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "f144879c-3c23-4e46-8c7b-6371f6ea72b2",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "407c6aec-12ce-4ea3-ab1b-dbe9a58dcdcd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "407c6aec-12ce-4ea3-ab1b-dbe9a58dcdcd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "d0c9b552-2d45-4111-9631-10757204cb31"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "d0c9b552-2d45-4111-9631-10757204cb31",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:35.667 11:22:34 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 59782 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 59782 ']' 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 59782 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:06:35.667 11:22:34 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59782 00:06:35.667 killing process with pid 59782 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59782' 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 59782 00:06:35.667 11:22:34 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 59782 00:06:37.061 11:22:35 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:37.061 11:22:35 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.061 11:22:35 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:06:37.061 11:22:35 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:37.061 11:22:35 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:37.061 ************************************ 00:06:37.061 START TEST bdev_hello_world 00:06:37.061 ************************************ 00:06:37.061 11:22:35 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:37.061 [2024-11-05 11:22:36.015946] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:37.061 [2024-11-05 11:22:36.016063] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59860 ] 00:06:37.061 [2024-11-05 11:22:36.169761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.061 [2024-11-05 11:22:36.252444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.623 [2024-11-05 11:22:36.750289] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:37.623 [2024-11-05 11:22:36.750330] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:37.623 [2024-11-05 11:22:36.750349] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:37.623 [2024-11-05 11:22:36.752767] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:37.623 [2024-11-05 11:22:36.753137] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:37.623 [2024-11-05 11:22:36.753164] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:37.623 [2024-11-05 11:22:36.753272] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
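The hello_bdev run traced above reduces to a single invocation that is already visible in the trace; pulled out of the test harness it looks roughly like this (same workspace-relative paths, with Nvme0n1 being the first bdev from the generated bdev.json):

  # open bdev Nvme0n1, write "Hello World!" to it, read it back, then stop
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1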
00:06:37.623 00:06:37.624 [2024-11-05 11:22:36.753289] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.556 ************************************ 00:06:38.556 END TEST bdev_hello_world 00:06:38.556 ************************************ 00:06:38.556 00:06:38.556 real 0m1.530s 00:06:38.556 user 0m1.264s 00:06:38.556 sys 0m0.158s 00:06:38.556 11:22:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:38.556 11:22:37 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:38.556 11:22:37 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:38.556 11:22:37 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:06:38.556 11:22:37 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:38.556 11:22:37 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:38.556 ************************************ 00:06:38.556 START TEST bdev_bounds 00:06:38.556 ************************************ 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=59897 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 59897' 00:06:38.556 Process bdevio pid: 59897 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 59897 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 59897 ']' 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:38.556 11:22:37 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:38.556 [2024-11-05 11:22:37.584429] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:06:38.556 [2024-11-05 11:22:37.584716] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59897 ] 00:06:38.556 [2024-11-05 11:22:37.752295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.813 [2024-11-05 11:22:37.854659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.813 [2024-11-05 11:22:37.854695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.813 [2024-11-05 11:22:37.854700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.379 11:22:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:39.379 11:22:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:06:39.379 11:22:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.379 I/O targets: 00:06:39.379 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.379 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:39.379 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.379 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.379 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.379 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.379 00:06:39.379 00:06:39.379 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.379 http://cunit.sourceforge.net/ 00:06:39.379 00:06:39.379 00:06:39.379 Suite: bdevio tests on: Nvme3n1 00:06:39.379 Test: blockdev write read block ...passed 00:06:39.379 Test: blockdev write zeroes read block ...passed 00:06:39.379 Test: blockdev write zeroes read no split ...passed 00:06:39.379 Test: blockdev write zeroes read split ...passed 00:06:39.379 Test: blockdev write zeroes read split partial ...passed 00:06:39.379 Test: blockdev reset ...[2024-11-05 11:22:38.592027] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:39.379 [2024-11-05 11:22:38.595043] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller passedsuccessful. 
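For reference, the bdev_bounds run whose I/O-target listing and per-bdev suites appear above and below is driven by two commands from the trace: the bdevio application started with -w so it waits for RPCs, and its companion tests.py which triggers the registered suites. Reproduced by hand (a sketch, keeping the same workspace-relative paths; the trailing '' is the empty extra-arguments slot the harness passes through):

  # start bdevio against the shared bdev config and leave it waiting for RPCs
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json '' &
  # run every registered bdevio suite against the exported bdevs
  ./test/bdev/bdevio/tests.py perform_tests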
00:06:39.379 00:06:39.379 Test: blockdev write read 8 blocks ...passed 00:06:39.379 Test: blockdev write read size > 128k ...passed 00:06:39.379 Test: blockdev write read invalid size ...passed 00:06:39.379 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.379 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.379 Test: blockdev write read max offset ...passed 00:06:39.379 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.379 Test: blockdev writev readv 8 blocks ...passed 00:06:39.379 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.379 Test: blockdev writev readv block ...passed 00:06:39.379 Test: blockdev writev readv size > 128k ...passed 00:06:39.379 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.379 Test: blockdev comparev and writev ...[2024-11-05 11:22:38.601327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c020a000 len:0x1000 00:06:39.379 [2024-11-05 11:22:38.601454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.379 passed 00:06:39.379 Test: blockdev nvme passthru rw ...passed 00:06:39.379 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:22:38.602052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.379 [2024-11-05 11:22:38.602148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.379 passed 00:06:39.379 Test: blockdev nvme admin passthru ...passed 00:06:39.379 Test: blockdev copy ...passed 00:06:39.379 Suite: bdevio tests on: Nvme2n3 00:06:39.379 Test: blockdev write read block ...passed 00:06:39.637 Test: blockdev write zeroes read block ...passed 00:06:39.637 Test: blockdev write zeroes read no split ...passed 00:06:39.637 Test: blockdev write zeroes read split ...passed 00:06:39.637 Test: blockdev write zeroes read split partial ...passed 00:06:39.637 Test: blockdev reset ...[2024-11-05 11:22:38.769052] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.637 [2024-11-05 11:22:38.772038] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.637 passed 00:06:39.637 Test: blockdev write read 8 blocks ...passed 00:06:39.637 Test: blockdev write read size > 128k ...passed 00:06:39.637 Test: blockdev write read invalid size ...passed 00:06:39.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.637 Test: blockdev write read max offset ...passed 00:06:39.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.637 Test: blockdev writev readv 8 blocks ...passed 00:06:39.637 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.637 Test: blockdev writev readv block ...passed 00:06:39.637 Test: blockdev writev readv size > 128k ...passed 00:06:39.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.637 Test: blockdev comparev and writev ...[2024-11-05 11:22:38.782290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c4606000 len:0x1000 00:06:39.637 [2024-11-05 11:22:38.782405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.637 passed 00:06:39.637 Test: blockdev nvme passthru rw ...passed 00:06:39.637 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:22:38.783185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.637 [2024-11-05 11:22:38.783209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.637 passed 00:06:39.637 Test: blockdev nvme admin passthru ...passed 00:06:39.637 Test: blockdev copy ...passed 00:06:39.637 Suite: bdevio tests on: Nvme2n2 00:06:39.637 Test: blockdev write read block ...passed 00:06:39.637 Test: blockdev write zeroes read block ...passed 00:06:39.637 Test: blockdev write zeroes read no split ...passed 00:06:39.637 Test: blockdev write zeroes read split ...passed 00:06:39.637 Test: blockdev write zeroes read split partial ...passed 00:06:39.637 Test: blockdev reset ...[2024-11-05 11:22:38.890492] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.637 [2024-11-05 11:22:38.893255] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.637 passed 00:06:39.637 Test: blockdev write read 8 blocks ...passed 00:06:39.637 Test: blockdev write read size > 128k ...passed 00:06:39.637 Test: blockdev write read invalid size ...passed 00:06:39.637 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.637 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.637 Test: blockdev write read max offset ...passed 00:06:39.637 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.637 Test: blockdev writev readv 8 blocks ...passed 00:06:39.637 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.637 Test: blockdev writev readv block ...passed 00:06:39.637 Test: blockdev writev readv size > 128k ...passed 00:06:39.637 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.637 Test: blockdev comparev and writev ...[2024-11-05 11:22:38.898875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e083c000 len:0x1000 00:06:39.637 [2024-11-05 11:22:38.898986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.637 passed 00:06:39.637 Test: blockdev nvme passthru rw ...passed 00:06:39.637 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:22:38.899664] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.637 [2024-11-05 11:22:38.899746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.637 passed 00:06:39.637 Test: blockdev nvme admin passthru ...passed 00:06:39.637 Test: blockdev copy ...passed 00:06:39.637 Suite: bdevio tests on: Nvme2n1 00:06:39.637 Test: blockdev write read block ...passed 00:06:39.895 Test: blockdev write zeroes read block ...passed 00:06:39.895 Test: blockdev write zeroes read no split ...passed 00:06:39.895 Test: blockdev write zeroes read split ...passed 00:06:39.895 Test: blockdev write zeroes read split partial ...passed 00:06:39.895 Test: blockdev reset ...[2024-11-05 11:22:38.961627] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.895 [2024-11-05 11:22:38.966319] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.895 passed 00:06:39.895 Test: blockdev write read 8 blocks ...passed 00:06:39.895 Test: blockdev write read size > 128k ...passed 00:06:39.895 Test: blockdev write read invalid size ...passed 00:06:39.895 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.895 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.895 Test: blockdev write read max offset ...passed 00:06:39.895 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.895 Test: blockdev writev readv 8 blocks ...passed 00:06:39.895 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.895 Test: blockdev writev readv block ...passed 00:06:39.895 Test: blockdev writev readv size > 128k ...passed 00:06:39.895 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.895 Test: blockdev comparev and writev ...[2024-11-05 11:22:38.972519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:06:39.895 Test: blockdev nvme passthru rw ...passed 00:06:39.895 Test: blockdev nvme passthru vendor specific ...SGL DATA BLOCK ADDRESS 0x2e0838000 len:0x1000 00:06:39.895 [2024-11-05 11:22:38.972620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.895 passed 00:06:39.895 Test: blockdev nvme admin passthru ...[2024-11-05 11:22:38.973285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.895 [2024-11-05 11:22:38.973307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.895 passed 00:06:39.895 Test: blockdev copy ...passed 00:06:39.896 Suite: bdevio tests on: Nvme1n1 00:06:39.896 Test: blockdev write read block ...passed 00:06:39.896 Test: blockdev write zeroes read block ...passed 00:06:39.896 Test: blockdev write zeroes read no split ...passed 00:06:39.896 Test: blockdev write zeroes read split ...passed 00:06:39.896 Test: blockdev write zeroes read split partial ...passed 00:06:39.896 Test: blockdev reset ...[2024-11-05 11:22:39.059664] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.896 [2024-11-05 11:22:39.063210] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:39.896 passed 00:06:39.896 Test: blockdev write read 8 blocks ...passed 00:06:39.896 Test: blockdev write read size > 128k ...passed 00:06:39.896 Test: blockdev write read invalid size ...passed 00:06:39.896 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.896 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.896 Test: blockdev write read max offset ...passed 00:06:39.896 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.896 Test: blockdev writev readv 8 blocks ...passed 00:06:39.896 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.896 Test: blockdev writev readv block ...passed 00:06:39.896 Test: blockdev writev readv size > 128k ...passed 00:06:39.896 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.896 Test: blockdev comparev and writev ...[2024-11-05 11:22:39.069142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e0834000 len:0x1000 00:06:39.896 [2024-11-05 11:22:39.069178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.896 passed 00:06:39.896 Test: blockdev nvme passthru rw ...passed 00:06:39.896 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.896 Test: blockdev nvme admin passthru ...[2024-11-05 11:22:39.069641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.896 [2024-11-05 11:22:39.069666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.896 passed 00:06:39.896 Test: blockdev copy ...passed 00:06:39.896 Suite: bdevio tests on: Nvme0n1 00:06:39.896 Test: blockdev write read block ...passed 00:06:39.896 Test: blockdev write zeroes read block ...passed 00:06:39.896 Test: blockdev write zeroes read no split ...passed 00:06:39.896 Test: blockdev write zeroes read split ...passed 00:06:39.896 Test: blockdev write zeroes read split partial ...passed 00:06:39.896 Test: blockdev reset ...[2024-11-05 11:22:39.168649] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:39.896 [2024-11-05 11:22:39.171424] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:06:39.896 passed 00:06:39.896 Test: blockdev write read 8 blocks ...passed 00:06:40.154 Test: blockdev write read size > 128k ...passed 00:06:40.154 Test: blockdev write read invalid size ...passed 00:06:40.154 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:40.154 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:40.154 Test: blockdev write read max offset ...passed 00:06:40.154 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:40.154 Test: blockdev writev readv 8 blocks ...passed 00:06:40.154 Test: blockdev writev readv 30 x 1block ...passed 00:06:40.154 Test: blockdev writev readv block ...passed 00:06:40.154 Test: blockdev writev readv size > 128k ...passed 00:06:40.154 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:40.154 Test: blockdev comparev and writev ...passed 00:06:40.154 Test: blockdev nvme passthru rw ...[2024-11-05 11:22:39.178052] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:40.154 separate metadata which is not supported yet. 00:06:40.154 passed 00:06:40.154 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:22:39.178545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:40.154 [2024-11-05 11:22:39.178652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0passed 00:06:40.154 Test: blockdev nvme admin passthru ... sqhd:0017 p:1 m:0 dnr:1 00:06:40.154 passed 00:06:40.154 Test: blockdev copy ...passed 00:06:40.154 00:06:40.154 Run Summary: Type Total Ran Passed Failed Inactive 00:06:40.154 suites 6 6 n/a 0 0 00:06:40.154 tests 138 138 138 0 0 00:06:40.154 asserts 893 893 893 0 n/a 00:06:40.154 00:06:40.154 Elapsed time = 1.622 seconds 00:06:40.154 0 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 59897 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 59897 ']' 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 59897 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59897 00:06:40.154 killing process with pid 59897 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59897' 00:06:40.154 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 59897 00:06:40.155 11:22:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 59897 00:06:45.450 11:22:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:45.450 00:06:45.450 real 0m6.767s 00:06:45.450 user 0m16.458s 00:06:45.450 sys 0m0.345s 00:06:45.450 11:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:45.450 11:22:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:45.450 ************************************ 00:06:45.450 END 
TEST bdev_bounds 00:06:45.450 ************************************ 00:06:45.450 11:22:44 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:45.450 11:22:44 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:06:45.450 11:22:44 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:45.450 11:22:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:45.450 ************************************ 00:06:45.450 START TEST bdev_nbd 00:06:45.450 ************************************ 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:45.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=59962 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 59962 /var/tmp/spdk-nbd.sock 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 59962 ']' 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:45.450 11:22:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:45.450 [2024-11-05 11:22:44.402556] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:45.450 [2024-11-05 11:22:44.402692] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:45.450 [2024-11-05 11:22:44.574377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.450 [2024-11-05 11:22:44.671428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.050 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.310 11:22:45 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.310 1+0 records in 00:06:46.310 1+0 records out 00:06:46.310 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553852 s, 7.4 MB/s 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.310 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.568 1+0 records in 00:06:46.568 1+0 records out 00:06:46.568 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000409036 s, 10.0 MB/s 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:46.568 11:22:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.568 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.827 1+0 records in 00:06:46.827 1+0 records out 00:06:46.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039443 s, 10.4 MB/s 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:46.827 11:22:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( 
i = 1 )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.085 1+0 records in 00:06:47.085 1+0 records out 00:06:47.085 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000326955 s, 12.5 MB/s 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.085 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.343 1+0 records in 00:06:47.343 1+0 records out 00:06:47.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000537369 s, 7.6 MB/s 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:47.343 1+0 records in 00:06:47.343 1+0 records out 00:06:47.343 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000424968 s, 9.6 MB/s 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:47.343 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd0", 00:06:47.601 "bdev_name": "Nvme0n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd1", 00:06:47.601 "bdev_name": "Nvme1n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd2", 00:06:47.601 "bdev_name": "Nvme2n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd3", 00:06:47.601 "bdev_name": "Nvme2n2" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd4", 00:06:47.601 "bdev_name": "Nvme2n3" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd5", 00:06:47.601 "bdev_name": "Nvme3n1" 00:06:47.601 } 00:06:47.601 ]' 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd0", 00:06:47.601 "bdev_name": "Nvme0n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd1", 00:06:47.601 "bdev_name": "Nvme1n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 
"nbd_device": "/dev/nbd2", 00:06:47.601 "bdev_name": "Nvme2n1" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd3", 00:06:47.601 "bdev_name": "Nvme2n2" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd4", 00:06:47.601 "bdev_name": "Nvme2n3" 00:06:47.601 }, 00:06:47.601 { 00:06:47.601 "nbd_device": "/dev/nbd5", 00:06:47.601 "bdev_name": "Nvme3n1" 00:06:47.601 } 00:06:47.601 ]' 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.601 11:22:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.859 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.116 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.116 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.116 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.116 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.117 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:48.375 11:22:47 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.375 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.633 11:22:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
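(For reference, the two polling helpers exercised repeatedly in the trace above reduce to the condensed bash sketch below. It is reconstructed from the traced commands rather than copied from the SPDK sources, so the temp-file path and sleep interval are illustrative.)

    # Wait for an nbd device to show up after nbd_start_disk, then prove it is readable.
    waitfornbd() {
        local nbd_name=$1 i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # one 4 KiB direct read; a non-empty copy means the kernel can service I/O
            dd if=/dev/$nbd_name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
            size=$(stat -c %s /tmp/nbdtest)
            rm -f /tmp/nbdtest
            [ "$size" != 0 ] && break
            sleep 0.1
        done
        return 0
    }

    # Wait for the device node to drop out of /proc/partitions after nbd_stop_disk.
    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1
        done
        return 0
    }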
00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.925 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.184 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:49.184 /dev/nbd0 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.442 1+0 records in 00:06:49.442 1+0 records out 00:06:49.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760042 s, 5.4 MB/s 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:49.442 /dev/nbd1 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.442 1+0 records in 00:06:49.442 1+0 records out 
00:06:49.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000411457 s, 10.0 MB/s 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:49.442 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:49.701 /dev/nbd10 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.701 1+0 records in 00:06:49.701 1+0 records out 00:06:49.701 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000639035 s, 6.4 MB/s 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.701 11:22:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:49.959 /dev/nbd11 00:06:49.959 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:49.959 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:06:49.960 11:22:49 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:49.960 1+0 records in 00:06:49.960 1+0 records out 00:06:49.960 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000603263 s, 6.8 MB/s 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:49.960 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:50.218 /dev/nbd12 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.218 1+0 records in 00:06:50.218 1+0 records out 00:06:50.218 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567649 s, 7.2 MB/s 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.218 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:50.476 /dev/nbd13 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:50.476 1+0 records in 00:06:50.476 1+0 records out 00:06:50.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649838 s, 6.3 MB/s 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:50.476 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd0", 00:06:50.735 "bdev_name": "Nvme0n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd1", 00:06:50.735 "bdev_name": "Nvme1n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd10", 00:06:50.735 "bdev_name": "Nvme2n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd11", 00:06:50.735 "bdev_name": "Nvme2n2" 00:06:50.735 }, 
00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd12", 00:06:50.735 "bdev_name": "Nvme2n3" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd13", 00:06:50.735 "bdev_name": "Nvme3n1" 00:06:50.735 } 00:06:50.735 ]' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd0", 00:06:50.735 "bdev_name": "Nvme0n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd1", 00:06:50.735 "bdev_name": "Nvme1n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd10", 00:06:50.735 "bdev_name": "Nvme2n1" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd11", 00:06:50.735 "bdev_name": "Nvme2n2" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd12", 00:06:50.735 "bdev_name": "Nvme2n3" 00:06:50.735 }, 00:06:50.735 { 00:06:50.735 "nbd_device": "/dev/nbd13", 00:06:50.735 "bdev_name": "Nvme3n1" 00:06:50.735 } 00:06:50.735 ]' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:50.735 /dev/nbd1 00:06:50.735 /dev/nbd10 00:06:50.735 /dev/nbd11 00:06:50.735 /dev/nbd12 00:06:50.735 /dev/nbd13' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:50.735 /dev/nbd1 00:06:50.735 /dev/nbd10 00:06:50.735 /dev/nbd11 00:06:50.735 /dev/nbd12 00:06:50.735 /dev/nbd13' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:50.735 256+0 records in 00:06:50.735 256+0 records out 00:06:50.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00829258 s, 126 MB/s 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:50.735 256+0 records in 00:06:50.735 256+0 records out 00:06:50.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0602531 s, 17.4 MB/s 00:06:50.735 11:22:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.735 11:22:49 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:50.993 256+0 records in 00:06:50.993 256+0 records out 00:06:50.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0683012 s, 15.4 MB/s 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:50.993 256+0 records in 00:06:50.993 256+0 records out 00:06:50.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647824 s, 16.2 MB/s 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:50.993 256+0 records in 00:06:50.993 256+0 records out 00:06:50.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642582 s, 16.3 MB/s 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:50.993 256+0 records in 00:06:50.993 256+0 records out 00:06:50.993 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0574067 s, 18.3 MB/s 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:50.993 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:51.251 256+0 records in 00:06:51.251 256+0 records out 00:06:51.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647409 s, 16.2 MB/s 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd10 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.251 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.509 11:22:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.509 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:51.766 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:51.766 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:51.766 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:51.766 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:51.766 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:51.767 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:51.767 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:51.767 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:51.767 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:51.767 11:22:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.057 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local 
nbd_name=nbd13 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.324 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:52.582 11:22:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:52.840 malloc_lvol_verify 00:06:52.840 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:53.098 d36a46d0-3bf8-44c5-b460-48d0cf313a4c 00:06:53.098 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:53.098 4de52bbf-0d4e-41cd-9898-414b4bb420af 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:53.356 /dev/nbd0 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 
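(The nbd_with_lvol_verify step traced here pushes a small logical volume through the same NBD path before the mkfs.ext4 run that follows. Collected from the trace, the sequence is:)

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MiB malloc bdev, 512 B blocks
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs
    rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs                    # 4 MiB lvol inside that store
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0
    # /sys/block/nbd0/size reads back 8192 512-byte sectors (4 MiB) once the export is
    # live; the device is then formatted with ext4 and stopped again below.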
00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:53.356 mke2fs 1.47.0 (5-Feb-2023) 00:06:53.356 Discarding device blocks: 0/4096 done 00:06:53.356 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:53.356 00:06:53.356 Allocating group tables: 0/1 done 00:06:53.356 Writing inode tables: 0/1 done 00:06:53.356 Creating journal (1024 blocks): done 00:06:53.356 Writing superblocks and filesystem accounting information: 0/1 done 00:06:53.356 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.356 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 59962 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 59962 ']' 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 59962 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59962 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.614 killing process with pid 59962 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59962' 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 59962 00:06:53.614 11:22:52 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 59962 00:06:54.178 11:22:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:54.178 00:06:54.178 real 0m9.057s 00:06:54.178 user 0m13.079s 00:06:54.178 sys 0m2.938s 00:06:54.178 ************************************ 00:06:54.178 END TEST bdev_nbd 00:06:54.178 ************************************ 00:06:54.178 11:22:53 blockdev_nvme.bdev_nbd 
-- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:54.178 11:22:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:54.178 11:22:53 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:54.178 11:22:53 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:54.178 skipping fio tests on NVMe due to multi-ns failures. 00:06:54.178 11:22:53 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:54.178 11:22:53 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:54.178 11:22:53 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.178 11:22:53 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:06:54.178 11:22:53 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:54.178 11:22:53 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:54.178 ************************************ 00:06:54.178 START TEST bdev_verify 00:06:54.178 ************************************ 00:06:54.178 11:22:53 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:54.435 [2024-11-05 11:22:53.491031] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:06:54.435 [2024-11-05 11:22:53.491146] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60330 ] 00:06:54.435 [2024-11-05 11:22:53.646473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.693 [2024-11-05 11:22:53.723647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.693 [2024-11-05 11:22:53.723659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.258 Running I/O for 5 seconds... 
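(For reference, the bdevperf invocation driving this run is the one traced above and can be repeated by hand as below. Flag meanings: -q 128 queue depth, -o 4096-byte I/Os, -w verify for the data-checking workload, -t 5 second runtime, -m 0x3 for two reactor cores; the remaining flags are as traced. The per-second IOPS samples and the per-bdev latency table follow in the output.)

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3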
00:06:57.123 22528.00 IOPS, 88.00 MiB/s [2024-11-05T11:22:57.770Z] 24320.00 IOPS, 95.00 MiB/s [2024-11-05T11:22:58.703Z] 24170.67 IOPS, 94.42 MiB/s [2024-11-05T11:22:59.637Z] 23904.00 IOPS, 93.38 MiB/s [2024-11-05T11:22:59.637Z] 23987.20 IOPS, 93.70 MiB/s 00:07:00.363 Latency(us) 00:07:00.363 [2024-11-05T11:22:59.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.363 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0xbd0bd 00:07:00.363 Nvme0n1 : 5.05 1990.02 7.77 0.00 0.00 64084.71 6427.57 61301.37 00:07:00.363 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:00.363 Nvme0n1 : 5.05 1952.69 7.63 0.00 0.00 65366.05 10737.82 69367.34 00:07:00.363 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0xa0000 00:07:00.363 Nvme1n1 : 5.05 1988.70 7.77 0.00 0.00 64027.82 8771.74 58478.28 00:07:00.363 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0xa0000 length 0xa0000 00:07:00.363 Nvme1n1 : 5.05 1952.08 7.63 0.00 0.00 65254.09 13006.38 62107.96 00:07:00.363 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0x80000 00:07:00.363 Nvme2n1 : 5.07 1995.95 7.80 0.00 0.00 63816.17 9477.51 56461.78 00:07:00.363 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x80000 length 0x80000 00:07:00.363 Nvme2n1 : 5.05 1950.80 7.62 0.00 0.00 65156.77 12703.90 62107.96 00:07:00.363 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0x80000 00:07:00.363 Nvme2n2 : 5.07 1995.37 7.79 0.00 0.00 63726.44 9830.40 56865.08 00:07:00.363 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x80000 length 0x80000 00:07:00.363 Nvme2n2 : 5.06 1949.56 7.62 0.00 0.00 65049.26 11947.72 66140.95 00:07:00.363 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0x80000 00:07:00.363 Nvme2n3 : 5.07 1994.79 7.79 0.00 0.00 63637.88 10132.87 57268.38 00:07:00.363 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x80000 length 0x80000 00:07:00.363 Nvme2n3 : 5.08 1967.24 7.68 0.00 0.00 64432.04 6503.19 68964.04 00:07:00.363 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x0 length 0x20000 00:07:00.363 Nvme3n1 : 5.07 1994.26 7.79 0.00 0.00 63567.58 10536.17 61301.37 00:07:00.363 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:00.363 Verification LBA range: start 0x20000 length 0x20000 00:07:00.363 Nvme3n1 : 5.08 1966.70 7.68 0.00 0.00 64380.57 6704.84 72997.02 00:07:00.363 [2024-11-05T11:22:59.637Z] =================================================================================================================== 00:07:00.363 [2024-11-05T11:22:59.637Z] Total : 23698.15 92.57 0.00 0.00 64368.12 6427.57 72997.02 00:07:00.929 00:07:00.929 real 0m6.766s 00:07:00.929 user 0m12.665s 00:07:00.929 sys 0m0.196s 00:07:00.929 11:23:00 blockdev_nvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:00.929 11:23:00 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:00.929 ************************************ 00:07:00.929 END TEST bdev_verify 00:07:00.929 ************************************ 00:07:01.188 11:23:00 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:01.188 11:23:00 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:01.188 11:23:00 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:01.188 11:23:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.188 ************************************ 00:07:01.188 START TEST bdev_verify_big_io 00:07:01.188 ************************************ 00:07:01.188 11:23:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:01.188 [2024-11-05 11:23:00.301046] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:01.188 [2024-11-05 11:23:00.301161] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60423 ] 00:07:01.188 [2024-11-05 11:23:00.460935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:01.446 [2024-11-05 11:23:00.559467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.446 [2024-11-05 11:23:00.559638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.011 Running I/O for 5 seconds... 
00:07:05.851 1341.00 IOPS, 83.81 MiB/s [2024-11-05T11:23:07.022Z] 1875.50 IOPS, 117.22 MiB/s [2024-11-05T11:23:07.281Z] 2147.00 IOPS, 134.19 MiB/s 00:07:08.007 Latency(us) 00:07:08.007 [2024-11-05T11:23:07.281Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.007 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0xbd0b 00:07:08.007 Nvme0n1 : 5.71 127.64 7.98 0.00 0.00 949025.42 9679.16 1096971.82 00:07:08.007 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:08.007 Nvme0n1 : 5.81 132.17 8.26 0.00 0.00 945194.01 15224.52 1045349.61 00:07:08.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0xa000 00:07:08.007 Nvme1n1 : 5.61 127.36 7.96 0.00 0.00 927509.64 79853.10 1084066.26 00:07:08.007 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0xa000 length 0xa000 00:07:08.007 Nvme1n1 : 5.81 128.01 8.00 0.00 0.00 931827.87 49000.76 993727.41 00:07:08.007 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0x8000 00:07:08.007 Nvme2n1 : 5.76 130.31 8.14 0.00 0.00 887118.96 41136.44 1690627.15 00:07:08.007 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x8000 length 0x8000 00:07:08.007 Nvme2n1 : 5.87 131.41 8.21 0.00 0.00 881740.16 52428.80 909841.33 00:07:08.007 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0x8000 00:07:08.007 Nvme2n2 : 5.86 135.40 8.46 0.00 0.00 822170.45 37708.41 1729343.80 00:07:08.007 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x8000 length 0x8000 00:07:08.007 Nvme2n2 : 5.82 132.07 8.25 0.00 0.00 857843.13 52832.10 884030.23 00:07:08.007 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0x8000 00:07:08.007 Nvme2n3 : 5.91 148.94 9.31 0.00 0.00 728437.80 22181.42 1742249.35 00:07:08.007 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x8000 length 0x8000 00:07:08.007 Nvme2n3 : 5.87 134.44 8.40 0.00 0.00 814193.40 51218.90 967916.31 00:07:08.007 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x0 length 0x2000 00:07:08.007 Nvme3n1 : 5.98 195.72 12.23 0.00 0.00 539809.92 110.28 1780966.01 00:07:08.007 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:08.007 Verification LBA range: start 0x2000 length 0x2000 00:07:08.007 Nvme3n1 : 5.92 154.59 9.66 0.00 0.00 693330.26 790.84 1071160.71 00:07:08.007 [2024-11-05T11:23:07.281Z] =================================================================================================================== 00:07:08.007 [2024-11-05T11:23:07.281Z] Total : 1678.04 104.88 0.00 0.00 814866.66 110.28 1780966.01 00:07:08.941 00:07:08.941 real 0m7.712s 00:07:08.941 user 0m14.561s 00:07:08.941 sys 0m0.220s 00:07:08.941 11:23:07 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:08.941 11:23:07 
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:08.941 ************************************ 00:07:08.941 END TEST bdev_verify_big_io 00:07:08.941 ************************************ 00:07:08.941 11:23:07 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.941 11:23:07 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:08.941 11:23:07 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:08.941 11:23:07 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:08.941 ************************************ 00:07:08.941 START TEST bdev_write_zeroes 00:07:08.941 ************************************ 00:07:08.941 11:23:07 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:08.941 [2024-11-05 11:23:08.049140] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:08.941 [2024-11-05 11:23:08.049252] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60528 ] 00:07:08.941 [2024-11-05 11:23:08.209059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.198 [2024-11-05 11:23:08.301865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.764 Running I/O for 1 seconds... 
00:07:10.696 74496.00 IOPS, 291.00 MiB/s 00:07:10.696 Latency(us) 00:07:10.696 [2024-11-05T11:23:09.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:10.696 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme0n1 : 1.02 12358.51 48.28 0.00 0.00 10333.79 7561.85 20366.57 00:07:10.696 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme1n1 : 1.02 12344.36 48.22 0.00 0.00 10335.46 7612.26 20568.22 00:07:10.696 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme2n1 : 1.02 12330.33 48.17 0.00 0.00 10320.56 7612.26 19963.27 00:07:10.696 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme2n2 : 1.02 12316.32 48.11 0.00 0.00 10308.30 7662.67 19257.50 00:07:10.696 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme2n3 : 1.02 12302.44 48.06 0.00 0.00 10293.18 7612.26 18753.38 00:07:10.696 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:10.696 Nvme3n1 : 1.03 12288.62 48.00 0.00 0.00 10279.38 7461.02 20467.40 00:07:10.696 [2024-11-05T11:23:09.970Z] =================================================================================================================== 00:07:10.696 [2024-11-05T11:23:09.970Z] Total : 73940.59 288.83 0.00 0.00 10311.78 7461.02 20568.22 00:07:11.627 00:07:11.627 real 0m2.641s 00:07:11.627 user 0m2.338s 00:07:11.627 sys 0m0.190s 00:07:11.627 11:23:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.627 ************************************ 00:07:11.627 END TEST bdev_write_zeroes 00:07:11.627 ************************************ 00:07:11.627 11:23:10 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:11.627 11:23:10 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.627 11:23:10 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:11.627 11:23:10 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.627 11:23:10 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:11.627 ************************************ 00:07:11.627 START TEST bdev_json_nonenclosed 00:07:11.627 ************************************ 00:07:11.627 11:23:10 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:11.627 [2024-11-05 11:23:10.745432] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:11.627 [2024-11-05 11:23:10.745560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60581 ] 00:07:11.886 [2024-11-05 11:23:10.907077] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.886 [2024-11-05 11:23:11.002017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.886 [2024-11-05 11:23:11.002087] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:11.886 [2024-11-05 11:23:11.002103] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:11.886 [2024-11-05 11:23:11.002113] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.143 00:07:12.143 real 0m0.493s 00:07:12.143 user 0m0.304s 00:07:12.143 sys 0m0.085s 00:07:12.143 11:23:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.143 11:23:11 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:12.143 ************************************ 00:07:12.143 END TEST bdev_json_nonenclosed 00:07:12.143 ************************************ 00:07:12.143 11:23:11 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.143 11:23:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:12.143 11:23:11 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.143 11:23:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.143 ************************************ 00:07:12.143 START TEST bdev_json_nonarray 00:07:12.143 ************************************ 00:07:12.143 11:23:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:12.143 [2024-11-05 11:23:11.293561] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:12.143 [2024-11-05 11:23:11.293688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60607 ] 00:07:12.401 [2024-11-05 11:23:11.452953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.401 [2024-11-05 11:23:11.553531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.401 [2024-11-05 11:23:11.553623] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
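A note on the two negative checks above: bdevperf's --json loader (json_config_prepare_ctx) only accepts a file whose top level is a single object containing a "subsystems" array, which is exactly what test/bdev/nonenclosed.json and test/bdev/nonarray.json are built to violate, so these runs and the failures that follow are the expected outcome. The sketch below shows only the accepted shape; the single attach entry mirrors the gen_nvme.sh output later in this log and is illustrative, not the contents of either fixture.

cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": { "trtype": "PCIe", "name": "Nvme0", "traddr": "0000:00:10.0" }
        }
      ]
    }
  ]
}
EOF
# Dropping the outer {} (the "nonenclosed" case) or turning "subsystems" into a non-array
# (the "nonarray" case) reproduces the two json_config_prepare_ctx errors logged above.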
00:07:12.401 [2024-11-05 11:23:11.553640] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:12.401 [2024-11-05 11:23:11.553649] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:12.659 00:07:12.659 real 0m0.498s 00:07:12.659 user 0m0.308s 00:07:12.659 sys 0m0.086s 00:07:12.659 11:23:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.659 11:23:11 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:12.659 ************************************ 00:07:12.659 END TEST bdev_json_nonarray 00:07:12.659 ************************************ 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:12.659 11:23:11 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:12.659 00:07:12.659 real 0m38.475s 00:07:12.659 user 1m3.756s 00:07:12.659 sys 0m4.839s 00:07:12.659 ************************************ 00:07:12.659 END TEST blockdev_nvme 00:07:12.659 ************************************ 00:07:12.659 11:23:11 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:12.659 11:23:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:12.659 11:23:11 -- spdk/autotest.sh@209 -- # uname -s 00:07:12.659 11:23:11 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:12.659 11:23:11 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:12.659 11:23:11 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:12.659 11:23:11 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:12.659 11:23:11 -- common/autotest_common.sh@10 -- # set +x 00:07:12.659 ************************************ 00:07:12.659 START TEST blockdev_nvme_gpt 00:07:12.659 ************************************ 00:07:12.659 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:12.659 * Looking for test storage... 
00:07:12.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:12.659 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:12.659 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:07:12.659 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:12.917 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:12.917 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:12.918 11:23:11 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc 
genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:12.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:12.918 --rc genhtml_branch_coverage=1 00:07:12.918 --rc genhtml_function_coverage=1 00:07:12.918 --rc genhtml_legend=1 00:07:12.918 --rc geninfo_all_blocks=1 00:07:12.918 --rc geninfo_unexecuted_blocks=1 00:07:12.918 00:07:12.918 ' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60685 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 60685 
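For readers tracing the prologue above: blockdev.sh starts spdk_tgt in the background (spdk_tgt_pid=60685 here) and then blocks in waitforlisten from autotest_common.sh until the target's RPC socket is usable before any bdev RPCs are issued. The real helper is not reproduced in this log; the snippet below is only a rough, assumed equivalent of what that wait accomplishes.

# Hedged sketch only -- not the actual waitforlisten implementation.
wait_for_rpc_socket() {
  local pid=$1 sock=${2:-/var/tmp/spdk.sock} retries=100
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process exited before listening
    [[ -S "$sock" ]] && return 0             # RPC UNIX socket is present; target reachable
    sleep 0.1
  done
  return 1
}
# Assumed usage: build/bin/spdk_tgt & wait_for_rpc_socket $! && scripts/rpc.py rpc_get_methods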
00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 60685 ']' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:12.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:12.918 11:23:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:12.918 11:23:11 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:12.918 [2024-11-05 11:23:12.065249] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:12.918 [2024-11-05 11:23:12.065373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60685 ] 00:07:13.176 [2024-11-05 11:23:12.222182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.176 [2024-11-05 11:23:12.319921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.741 11:23:12 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:13.741 11:23:12 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:07:13.741 11:23:12 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:13.741 11:23:12 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:13.741 11:23:12 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:13.999 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.267 Waiting for block devices as requested 00:07:14.267 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:14.267 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:14.267 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:14.267 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:19.584 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:19.584 11:23:18 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:19.584 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:19.585 BYT; 00:07:19.585 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:19.585 BYT; 00:07:19.585 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:19.585 11:23:18 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:19.585 11:23:18 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:20.528 The operation has completed successfully. 00:07:20.528 11:23:19 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:21.468 The operation has completed successfully. 00:07:21.468 11:23:20 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:22.036 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:22.294 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.294 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.294 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.551 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:22.551 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.551 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.551 [] 00:07:22.551 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:22.551 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:22.551 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.551 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.809 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:22.809 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.809 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.809 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:22.809 11:23:21 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:22.809 11:23:21 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.809 11:23:21 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.809 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.809 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:22.809 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:22.809 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:22.809 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:22.809 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:23.068 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:23.068 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:23.068 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:23.069 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "431ba695-28ce-4c8d-86af-caf10ad937dd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "431ba695-28ce-4c8d-86af-caf10ad937dd",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "5ba7d428-812d-43d7-89c1-346f9a25fe54"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "5ba7d428-812d-43d7-89c1-346f9a25fe54",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "94f746ba-ea44-414e-a867-b7a930455dda"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "94f746ba-ea44-414e-a867-b7a930455dda",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "fef44558-abbb-4ca1-9d53-5a9bfbe2789d"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "fef44558-abbb-4ca1-9d53-5a9bfbe2789d",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "428c5a12-2b0e-473b-b59e-3a405ed3a664"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "428c5a12-2b0e-473b-b59e-3a405ed3a664",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:23.069 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:23.069 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:23.069 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:23.069 11:23:22 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 60685 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 60685 ']' 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 60685 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60685 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.069 killing process with pid 60685 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60685' 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 60685 00:07:23.069 11:23:22 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 60685 00:07:24.447 11:23:23 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:24.447 11:23:23 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:24.447 11:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:24.447 11:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.447 11:23:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:24.447 ************************************ 00:07:24.447 START TEST bdev_hello_world 00:07:24.447 ************************************ 00:07:24.447 11:23:23 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:24.447 
[2024-11-05 11:23:23.701391] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:24.447 [2024-11-05 11:23:23.701505] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61310 ] 00:07:24.705 [2024-11-05 11:23:23.856308] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.705 [2024-11-05 11:23:23.949138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.270 [2024-11-05 11:23:24.486536] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:25.270 [2024-11-05 11:23:24.486583] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:25.270 [2024-11-05 11:23:24.486600] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:25.270 [2024-11-05 11:23:24.489013] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:25.270 [2024-11-05 11:23:24.489405] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:25.270 [2024-11-05 11:23:24.489433] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:25.270 [2024-11-05 11:23:24.489647] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:25.270 00:07:25.270 [2024-11-05 11:23:24.489675] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:26.203 00:07:26.203 real 0m1.542s 00:07:26.203 user 0m1.265s 00:07:26.203 sys 0m0.171s 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:26.203 ************************************ 00:07:26.203 END TEST bdev_hello_world 00:07:26.203 ************************************ 00:07:26.203 11:23:25 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:26.203 11:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:26.203 11:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:26.203 11:23:25 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:26.203 ************************************ 00:07:26.203 START TEST bdev_bounds 00:07:26.203 ************************************ 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61345 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:26.203 Process bdevio pid: 61345 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61345' 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61345 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61345 ']' 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:26.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:26.203 11:23:25 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:26.203 [2024-11-05 11:23:25.276758] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:26.203 [2024-11-05 11:23:25.277174] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61345 ] 00:07:26.204 [2024-11-05 11:23:25.437322] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:26.461 [2024-11-05 11:23:25.536034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.461 [2024-11-05 11:23:25.536244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.461 [2024-11-05 11:23:25.536266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:27.026 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:27.026 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:27.026 11:23:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:27.026 I/O targets: 00:07:27.026 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:27.026 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:27.026 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:27.026 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.026 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.027 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:27.027 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:27.027 00:07:27.027 00:07:27.027 CUnit - A unit testing framework for C - Version 2.1-3 00:07:27.027 http://cunit.sourceforge.net/ 00:07:27.027 00:07:27.027 00:07:27.027 Suite: bdevio tests on: Nvme3n1 00:07:27.027 Test: blockdev write read block ...passed 00:07:27.027 Test: blockdev write zeroes read block ...passed 00:07:27.027 Test: blockdev write zeroes read no split ...passed 00:07:27.027 Test: blockdev write zeroes read split ...passed 00:07:27.027 Test: blockdev write zeroes read split partial ...passed 00:07:27.027 Test: blockdev reset ...[2024-11-05 11:23:26.292423] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:27.027 [2024-11-05 11:23:26.296691] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:27.027 passed 00:07:27.027 Test: blockdev write read 8 blocks ...passed 00:07:27.027 Test: blockdev write read size > 128k ...passed 00:07:27.027 Test: blockdev write read invalid size ...passed 00:07:27.027 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.027 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.027 Test: blockdev write read max offset ...passed 00:07:27.027 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.027 Test: blockdev writev readv 8 blocks ...passed 00:07:27.027 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.027 Test: blockdev writev readv block ...passed 00:07:27.027 Test: blockdev writev readv size > 128k ...passed 00:07:27.027 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.027 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.303502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd404000 len:0x1000 00:07:27.027 [2024-11-05 11:23:26.303626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.284 passed 00:07:27.284 Test: blockdev nvme passthru rw ...passed 00:07:27.284 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:23:26.304548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.284 [2024-11-05 11:23:26.304637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0passed 00:07:27.284 Test: blockdev nvme admin passthru ... sqhd:001c p:1 m:0 dnr:1 00:07:27.284 passed 00:07:27.284 Test: blockdev copy ...passed 00:07:27.284 Suite: bdevio tests on: Nvme2n3 00:07:27.284 Test: blockdev write read block ...passed 00:07:27.284 Test: blockdev write zeroes read block ...passed 00:07:27.284 Test: blockdev write zeroes read no split ...passed 00:07:27.284 Test: blockdev write zeroes read split ...passed 00:07:27.284 Test: blockdev write zeroes read split partial ...passed 00:07:27.284 Test: blockdev reset ...[2024-11-05 11:23:26.361844] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:27.284 [2024-11-05 11:23:26.364944] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:27.284 passed 00:07:27.284 Test: blockdev write read 8 blocks ...passed 00:07:27.284 Test: blockdev write read size > 128k ...passed 00:07:27.284 Test: blockdev write read invalid size ...passed 00:07:27.284 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.284 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.284 Test: blockdev write read max offset ...passed 00:07:27.284 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.284 Test: blockdev writev readv 8 blocks ...passed 00:07:27.284 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.284 Test: blockdev writev readv block ...passed 00:07:27.284 Test: blockdev writev readv size > 128k ...passed 00:07:27.284 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.284 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.371728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2cd402000 len:0x1000 00:07:27.284 [2024-11-05 11:23:26.371767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.284 passed 00:07:27.284 Test: blockdev nvme passthru rw ...passed 00:07:27.284 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:23:26.372575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.284 [2024-11-05 11:23:26.372602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.284 passed 00:07:27.284 Test: blockdev nvme admin passthru ...passed 00:07:27.284 Test: blockdev copy ...passed 00:07:27.284 Suite: bdevio tests on: Nvme2n2 00:07:27.284 Test: blockdev write read block ...passed 00:07:27.285 Test: blockdev write zeroes read block ...passed 00:07:27.285 Test: blockdev write zeroes read no split ...passed 00:07:27.285 Test: blockdev write zeroes read split ...passed 00:07:27.285 Test: blockdev write zeroes read split partial ...passed 00:07:27.285 Test: blockdev reset ...[2024-11-05 11:23:26.428291] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:27.285 [2024-11-05 11:23:26.431396] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:27.285 passed 00:07:27.285 Test: blockdev write read 8 blocks ...passed 00:07:27.285 Test: blockdev write read size > 128k ...passed 00:07:27.285 Test: blockdev write read invalid size ...passed 00:07:27.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.285 Test: blockdev write read max offset ...passed 00:07:27.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.285 Test: blockdev writev readv 8 blocks ...passed 00:07:27.285 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.285 Test: blockdev writev readv block ...passed 00:07:27.285 Test: blockdev writev readv size > 128k ...passed 00:07:27.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.285 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.438124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db038000 len:0x1000 00:07:27.285 passed 00:07:27.285 Test: blockdev nvme passthru rw ...[2024-11-05 11:23:26.438162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.285 passed 00:07:27.285 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:23:26.438758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.285 passed 00:07:27.285 Test: blockdev nvme admin passthru ...[2024-11-05 11:23:26.438783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.285 passed 00:07:27.285 Test: blockdev copy ...passed 00:07:27.285 Suite: bdevio tests on: Nvme2n1 00:07:27.285 Test: blockdev write read block ...passed 00:07:27.285 Test: blockdev write zeroes read block ...passed 00:07:27.285 Test: blockdev write zeroes read no split ...passed 00:07:27.285 Test: blockdev write zeroes read split ...passed 00:07:27.285 Test: blockdev write zeroes read split partial ...passed 00:07:27.285 Test: blockdev reset ...[2024-11-05 11:23:26.494246] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:27.285 [2024-11-05 11:23:26.497172] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:27.285 passed 00:07:27.285 Test: blockdev write read 8 blocks ...passed 00:07:27.285 Test: blockdev write read size > 128k ...passed 00:07:27.285 Test: blockdev write read invalid size ...passed 00:07:27.285 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.285 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.285 Test: blockdev write read max offset ...passed 00:07:27.285 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.285 Test: blockdev writev readv 8 blocks ...passed 00:07:27.285 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.285 Test: blockdev writev readv block ...passed 00:07:27.285 Test: blockdev writev readv size > 128k ...passed 00:07:27.285 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.285 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.503981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2db034000 len:0x1000 00:07:27.285 [2024-11-05 11:23:26.504018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.285 passed 00:07:27.285 Test: blockdev nvme passthru rw ...passed 00:07:27.285 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:23:26.504789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:27.285 [2024-11-05 11:23:26.504825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:27.285 passed 00:07:27.285 Test: blockdev nvme admin passthru ...passed 00:07:27.285 Test: blockdev copy ...passed 00:07:27.285 Suite: bdevio tests on: Nvme1n1p2 00:07:27.285 Test: blockdev write read block ...passed 00:07:27.285 Test: blockdev write zeroes read block ...passed 00:07:27.285 Test: blockdev write zeroes read no split ...passed 00:07:27.285 Test: blockdev write zeroes read split ...passed 00:07:27.285 Test: blockdev write zeroes read split partial ...passed 00:07:27.285 Test: blockdev reset ...[2024-11-05 11:23:26.560407] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:27.543 [2024-11-05 11:23:26.563361] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:27.543 passed 00:07:27.543 Test: blockdev write read 8 blocks ...passed 00:07:27.543 Test: blockdev write read size > 128k ...passed 00:07:27.543 Test: blockdev write read invalid size ...passed 00:07:27.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.543 Test: blockdev write read max offset ...passed 00:07:27.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.543 Test: blockdev writev readv 8 blocks ...passed 00:07:27.543 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.543 Test: blockdev writev readv block ...passed 00:07:27.543 Test: blockdev writev readv size > 128k ...passed 00:07:27.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.543 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.570094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2db030000 len:0x1000 00:07:27.543 [2024-11-05 11:23:26.570130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.543 passed 00:07:27.543 Test: blockdev nvme passthru rw ...passed 00:07:27.543 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.543 Test: blockdev nvme admin passthru ...passed 00:07:27.543 Test: blockdev copy ...passed 00:07:27.543 Suite: bdevio tests on: Nvme1n1p1 00:07:27.543 Test: blockdev write read block ...passed 00:07:27.543 Test: blockdev write zeroes read block ...passed 00:07:27.543 Test: blockdev write zeroes read no split ...passed 00:07:27.543 Test: blockdev write zeroes read split ...passed 00:07:27.543 Test: blockdev write zeroes read split partial ...passed 00:07:27.543 Test: blockdev reset ...[2024-11-05 11:23:26.613159] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:27.543 [2024-11-05 11:23:26.615779] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
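Note how the two COMPARE commands above land at lba 655360 (Nvme1n1p2) and lba 256 (Nvme1n1p1): the partition bdevs translate their I/O into absolute LBAs on the shared namespace, so the printed addresses include each partition's starting offset. The "(02/85)" and "(00/01)" pairs in the completion notices are the NVMe status code type and status code in hex; a small, purely illustrative decoding helper in plain bash (spec names taken from the NVMe base specification):

  decode_status() {
    # $1 = status code type (hex), $2 = status code (hex)
    case "$1/$2" in
      00/00) echo "Generic Command Status / Successful Completion" ;;
      00/01) echo "Generic Command Status / Invalid Command Opcode" ;;
      02/85) echo "Media and Data Integrity Errors / Compare Failure" ;;
      *)     echo "see the status code tables in the NVMe base spec" ;;
    esac
  }
  decode_status 02 85   # the intentional miscompare bdevio provokes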
00:07:27.543 passed 00:07:27.543 Test: blockdev write read 8 blocks ...passed 00:07:27.543 Test: blockdev write read size > 128k ...passed 00:07:27.543 Test: blockdev write read invalid size ...passed 00:07:27.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.543 Test: blockdev write read max offset ...passed 00:07:27.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.543 Test: blockdev writev readv 8 blocks ...passed 00:07:27.543 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.543 Test: blockdev writev readv block ...passed 00:07:27.543 Test: blockdev writev readv size > 128k ...passed 00:07:27.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.543 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.622669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2cd20e000 len:0x1000 00:07:27.543 [2024-11-05 11:23:26.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:27.543 passed 00:07:27.543 Test: blockdev nvme passthru rw ...passed 00:07:27.543 Test: blockdev nvme passthru vendor specific ...passed 00:07:27.543 Test: blockdev nvme admin passthru ...passed 00:07:27.543 Test: blockdev copy ...passed 00:07:27.543 Suite: bdevio tests on: Nvme0n1 00:07:27.543 Test: blockdev write read block ...passed 00:07:27.543 Test: blockdev write zeroes read block ...passed 00:07:27.543 Test: blockdev write zeroes read no split ...passed 00:07:27.543 Test: blockdev write zeroes read split ...passed 00:07:27.543 Test: blockdev write zeroes read split partial ...passed 00:07:27.543 Test: blockdev reset ...[2024-11-05 11:23:26.664008] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:27.543 [2024-11-05 11:23:26.667684] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:07:27.543 passed 00:07:27.543 Test: blockdev write read 8 blocks ...passed 00:07:27.543 Test: blockdev write read size > 128k ...passed 00:07:27.543 Test: blockdev write read invalid size ...passed 00:07:27.543 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:27.543 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:27.543 Test: blockdev write read max offset ...passed 00:07:27.543 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:27.543 Test: blockdev writev readv 8 blocks ...passed 00:07:27.543 Test: blockdev writev readv 30 x 1block ...passed 00:07:27.543 Test: blockdev writev readv block ...passed 00:07:27.543 Test: blockdev writev readv size > 128k ...passed 00:07:27.543 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:27.543 Test: blockdev comparev and writev ...[2024-11-05 11:23:26.673680] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:27.543 separate metadata which is not supported yet. 
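The comparev_and_writev case is skipped on Nvme0n1 because that namespace is formatted with a separate metadata buffer, which the test does not support yet. Whether a given bdev carries metadata, and whether it is interleaved or separate, can be read back from the target; a hedged sketch, assuming the default RPC socket and that this SPDK build reports the md_size/md_interleave fields in bdev_get_bdevs output:

  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
    | jq '.[0] | {name, block_size, md_size, md_interleave}'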
00:07:27.543 passed 00:07:27.543 Test: blockdev nvme passthru rw ...passed 00:07:27.543 Test: blockdev nvme passthru vendor specific ...[2024-11-05 11:23:26.674200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:27.543 [2024-11-05 11:23:26.674236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:27.543 passed 00:07:27.543 Test: blockdev nvme admin passthru ...passed 00:07:27.543 Test: blockdev copy ...passed 00:07:27.543 00:07:27.543 Run Summary: Type Total Ran Passed Failed Inactive 00:07:27.543 suites 7 7 n/a 0 0 00:07:27.543 tests 161 161 161 0 0 00:07:27.543 asserts 1025 1025 1025 0 n/a 00:07:27.543 00:07:27.543 Elapsed time = 1.148 seconds 00:07:27.543 0 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61345 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61345 ']' 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61345 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61345 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61345' 00:07:27.543 killing process with pid 61345 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61345 00:07:27.543 11:23:26 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61345 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:28.475 00:07:28.475 real 0m2.173s 00:07:28.475 user 0m5.554s 00:07:28.475 sys 0m0.285s 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:28.475 ************************************ 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:28.475 END TEST bdev_bounds 00:07:28.475 ************************************ 00:07:28.475 11:23:27 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:28.475 11:23:27 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:28.475 11:23:27 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:28.475 11:23:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:28.475 ************************************ 00:07:28.475 START TEST bdev_nbd 00:07:28.475 ************************************ 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:28.475 11:23:27 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:28.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61400 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61400 /var/tmp/spdk-nbd.sock 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61400 ']' 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:28.475 11:23:27 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:28.475 [2024-11-05 11:23:27.489431] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
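The nbd test drives everything over the dedicated RPC socket the bdev_svc app was just started with (-r /var/tmp/spdk-nbd.sock). Reduced to its essentials, the start/stop verification that follows looks roughly like the sketch below; the socket, script path and bdev names match this run, and the kernel nbd module must be loaded on the host (the test checks /sys/module/nbd for exactly that reason):

  sock=/var/tmp/spdk-nbd.sock
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # export a bdev as a kernel block device, list the exports, then tear down
  "$rpc" -s "$sock" nbd_start_disk Nvme0n1 /dev/nbd0
  "$rpc" -s "$sock" nbd_get_disks | jq -r '.[] | .nbd_device'
  "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0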
00:07:28.475 [2024-11-05 11:23:27.489523] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:28.475 [2024-11-05 11:23:27.645162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.475 [2024-11-05 11:23:27.744481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.408 1+0 records in 00:07:29.408 1+0 records out 00:07:29.408 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368872 s, 11.1 MB/s 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.408 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.666 1+0 records in 00:07:29.666 1+0 records out 00:07:29.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040815 s, 10.0 MB/s 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.666 11:23:28 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:29.924 1+0 records in 00:07:29.924 1+0 records out 00:07:29.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427196 s, 9.6 MB/s 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:29.924 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.182 1+0 records in 00:07:30.182 1+0 records out 00:07:30.182 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439887 s, 9.3 MB/s 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.182 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:30.439 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:30.439 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:30.439 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.440 1+0 records in 00:07:30.440 1+0 records out 00:07:30.440 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000442675 s, 9.3 MB/s 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.440 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.697 1+0 records in 00:07:30.697 1+0 records out 00:07:30.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000368866 s, 11.1 MB/s 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.697 1+0 records in 00:07:30.697 1+0 records out 00:07:30.697 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000523294 s, 7.8 MB/s 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:30.697 11:23:29 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd0", 00:07:30.956 "bdev_name": "Nvme0n1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd1", 00:07:30.956 "bdev_name": "Nvme1n1p1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd2", 00:07:30.956 "bdev_name": "Nvme1n1p2" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd3", 00:07:30.956 "bdev_name": "Nvme2n1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd4", 00:07:30.956 "bdev_name": "Nvme2n2" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd5", 00:07:30.956 "bdev_name": "Nvme2n3" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd6", 00:07:30.956 "bdev_name": "Nvme3n1" 00:07:30.956 } 00:07:30.956 ]' 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd0", 00:07:30.956 "bdev_name": "Nvme0n1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd1", 00:07:30.956 "bdev_name": "Nvme1n1p1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd2", 00:07:30.956 "bdev_name": "Nvme1n1p2" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd3", 00:07:30.956 "bdev_name": "Nvme2n1" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd4", 00:07:30.956 "bdev_name": "Nvme2n2" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd5", 00:07:30.956 "bdev_name": "Nvme2n3" 00:07:30.956 }, 00:07:30.956 { 00:07:30.956 "nbd_device": "/dev/nbd6", 00:07:30.956 "bdev_name": "Nvme3n1" 00:07:30.956 } 00:07:30.956 ]' 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:30.956 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.214 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.482 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.740 11:23:30 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.740 11:23:30 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:31.999 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:32.256 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.514 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:32.772 
11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:32.772 11:23:31 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:33.030 /dev/nbd0 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.030 1+0 records in 00:07:33.030 1+0 records out 00:07:33.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328983 s, 12.5 MB/s 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.030 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:33.289 /dev/nbd1 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:33.289 11:23:32 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.289 1+0 records in 00:07:33.289 1+0 records out 00:07:33.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000359019 s, 11.4 MB/s 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.289 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:33.547 /dev/nbd10 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:33.547 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.547 1+0 records in 00:07:33.547 1+0 records out 00:07:33.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042772 s, 9.6 MB/s 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:33.548 /dev/nbd11 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.548 1+0 records in 00:07:33.548 1+0 records out 00:07:33.548 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000504864 s, 8.1 MB/s 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:33.548 11:23:32 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:33.807 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.807 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.807 11:23:32 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:33.807 /dev/nbd12 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
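Each nbd_start_disk is followed by the same readiness probe: wait until the device shows up in /proc/partitions, then read one 4 KiB block with O_DIRECT and confirm the copy really produced 4096 bytes. A condensed, stand-alone version of that loop (the device argument and output path are placeholders, and the retry count mirrors the i <= 20 loop in the trace):

  wait_for_nbd() {
    local dev=$1 out=/tmp/nbdprobe
    for _ in $(seq 1 20); do
      if grep -q -w "${dev#/dev/}" /proc/partitions; then
        # one direct-I/O read proves the export is actually serving data
        dd if="$dev" of="$out" bs=4096 count=1 iflag=direct 2>/dev/null && \
          [ "$(stat -c %s "$out")" -eq 4096 ] && { rm -f "$out"; return 0; }
      fi
      sleep 0.1
    done
    return 1
  }
  wait_for_nbd /dev/nbd12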
00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:33.807 1+0 records in 00:07:33.807 1+0 records out 00:07:33.807 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000403028 s, 10.2 MB/s 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:33.807 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:34.065 /dev/nbd13 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.065 1+0 records in 00:07:34.065 1+0 records out 00:07:34.065 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000387084 s, 10.6 MB/s 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:34.065 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:34.323 /dev/nbd14 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:34.323 1+0 records in 00:07:34.323 1+0 records out 00:07:34.323 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000478791 s, 8.6 MB/s 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:34.323 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:34.581 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd0", 00:07:34.581 "bdev_name": "Nvme0n1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd1", 00:07:34.581 "bdev_name": "Nvme1n1p1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd10", 00:07:34.581 "bdev_name": "Nvme1n1p2" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd11", 00:07:34.581 "bdev_name": "Nvme2n1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd12", 00:07:34.581 "bdev_name": "Nvme2n2" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd13", 00:07:34.581 "bdev_name": "Nvme2n3" 
00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd14", 00:07:34.581 "bdev_name": "Nvme3n1" 00:07:34.581 } 00:07:34.581 ]' 00:07:34.581 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd0", 00:07:34.581 "bdev_name": "Nvme0n1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd1", 00:07:34.581 "bdev_name": "Nvme1n1p1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd10", 00:07:34.581 "bdev_name": "Nvme1n1p2" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd11", 00:07:34.581 "bdev_name": "Nvme2n1" 00:07:34.581 }, 00:07:34.581 { 00:07:34.581 "nbd_device": "/dev/nbd12", 00:07:34.582 "bdev_name": "Nvme2n2" 00:07:34.582 }, 00:07:34.582 { 00:07:34.582 "nbd_device": "/dev/nbd13", 00:07:34.582 "bdev_name": "Nvme2n3" 00:07:34.582 }, 00:07:34.582 { 00:07:34.582 "nbd_device": "/dev/nbd14", 00:07:34.582 "bdev_name": "Nvme3n1" 00:07:34.582 } 00:07:34.582 ]' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:34.582 /dev/nbd1 00:07:34.582 /dev/nbd10 00:07:34.582 /dev/nbd11 00:07:34.582 /dev/nbd12 00:07:34.582 /dev/nbd13 00:07:34.582 /dev/nbd14' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:34.582 /dev/nbd1 00:07:34.582 /dev/nbd10 00:07:34.582 /dev/nbd11 00:07:34.582 /dev/nbd12 00:07:34.582 /dev/nbd13 00:07:34.582 /dev/nbd14' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:34.582 256+0 records in 00:07:34.582 256+0 records out 00:07:34.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100407 s, 104 MB/s 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:34.582 256+0 records in 00:07:34.582 256+0 records out 00:07:34.582 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0613736 s, 17.1 MB/s 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.582 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:34.840 256+0 records in 00:07:34.840 256+0 records out 00:07:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0657472 s, 15.9 MB/s 00:07:34.840 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.840 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:34.840 256+0 records in 00:07:34.840 256+0 records out 00:07:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0647016 s, 16.2 MB/s 00:07:34.840 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.840 11:23:33 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:34.840 256+0 records in 00:07:34.840 256+0 records out 00:07:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0642567 s, 16.3 MB/s 00:07:34.840 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.840 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:34.840 256+0 records in 00:07:34.840 256+0 records out 00:07:34.840 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0604306 s, 17.4 MB/s 00:07:34.840 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:34.840 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:35.099 256+0 records in 00:07:35.099 256+0 records out 00:07:35.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0621704 s, 16.9 MB/s 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:35.099 256+0 records in 00:07:35.099 256+0 records out 00:07:35.099 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609447 s, 17.2 MB/s 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i 
in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.099 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.357 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.615 11:23:34 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.873 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:36.131 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.132 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.132 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.132 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:36.390 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.648 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:36.907 11:23:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:36.907 malloc_lvol_verify 00:07:36.907 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:37.165 72fb6017-9f52-45ae-b5bb-834a5930dbc1 00:07:37.165 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:37.423 7cc5c655-2018-44f6-8128-6a847ae192a9 00:07:37.423 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:37.681 /dev/nbd0 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:37.681 mke2fs 1.47.0 (5-Feb-2023) 00:07:37.681 Discarding device blocks: 0/4096 done 00:07:37.681 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:37.681 00:07:37.681 Allocating group tables: 0/1 done 00:07:37.681 Writing inode tables: 0/1 done 00:07:37.681 Creating journal (1024 blocks): done 00:07:37.681 Writing superblocks and filesystem accounting information: 0/1 done 00:07:37.681 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:37.681 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61400 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61400 ']' 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61400 00:07:37.939 11:23:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61400 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.939 killing process with pid 61400 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61400' 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61400 00:07:37.939 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61400 00:07:38.508 11:23:37 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:38.508 00:07:38.508 real 0m10.208s 00:07:38.508 user 0m14.769s 00:07:38.508 sys 0m3.327s 00:07:38.508 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.508 11:23:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 ************************************ 00:07:38.508 END TEST bdev_nbd 00:07:38.508 ************************************ 00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:38.508 skipping fio tests on NVMe due to multi-ns failures. 00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:38.508 11:23:37 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.508 11:23:37 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:38.508 11:23:37 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.508 11:23:37 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:38.508 ************************************ 00:07:38.508 START TEST bdev_verify 00:07:38.508 ************************************ 00:07:38.508 11:23:37 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.508 [2024-11-05 11:23:37.734400] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:38.508 [2024-11-05 11:23:37.734513] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61805 ] 00:07:38.766 [2024-11-05 11:23:37.888472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:38.766 [2024-11-05 11:23:37.966758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.766 [2024-11-05 11:23:37.966851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.333 Running I/O for 5 seconds... 
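Each bdevperf-based test in this block is launched the same way: the examples binary is pointed at the shared bdev.json config and the workload is selected with flags. A sketch of the verify invocation as traced above; flag comments are given only where the meaning is unambiguous, and -C plus the trailing empty argument are simply reproduced as the harness passes them.

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

# -q 128     queue depth per job
# -o 4096    I/O size in bytes (4 KiB)
# -w verify  write a pattern, read it back, and check the data
# -t 5       run time in seconds
# -m 0x3     core mask: reactors on cores 0 and 1 (matches the trace)
# -C and the trailing '' are passed through exactly as the harness does above
"$bdevperf" --json "$conf" -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''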
00:07:41.637 20480.00 IOPS, 80.00 MiB/s [2024-11-05T11:23:41.845Z] 20992.00 IOPS, 82.00 MiB/s [2024-11-05T11:23:42.821Z] 20821.33 IOPS, 81.33 MiB/s [2024-11-05T11:23:43.755Z] 20992.00 IOPS, 82.00 MiB/s [2024-11-05T11:23:43.755Z] 21452.80 IOPS, 83.80 MiB/s 00:07:44.481 Latency(us) 00:07:44.481 [2024-11-05T11:23:43.755Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:44.481 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x0 length 0xbd0bd 00:07:44.481 Nvme0n1 : 5.07 1503.42 5.87 0.00 0.00 84653.61 9931.22 81869.59 00:07:44.481 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:07:44.481 Nvme0n1 : 5.05 1495.33 5.84 0.00 0.00 85194.37 16031.11 88322.36 00:07:44.481 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x0 length 0x4ff80 00:07:44.481 Nvme1n1p1 : 5.08 1511.23 5.90 0.00 0.00 84388.79 13006.38 77030.01 00:07:44.481 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x4ff80 length 0x4ff80 00:07:44.481 Nvme1n1p1 : 5.08 1499.81 5.86 0.00 0.00 84708.48 6654.42 75416.81 00:07:44.481 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x0 length 0x4ff7f 00:07:44.481 Nvme1n1p2 : 5.08 1510.77 5.90 0.00 0.00 84271.47 13308.85 72190.42 00:07:44.481 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:07:44.481 Nvme1n1p2 : 5.09 1508.78 5.89 0.00 0.00 84178.87 9326.28 70173.93 00:07:44.481 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x0 length 0x80000 00:07:44.481 Nvme2n1 : 5.08 1510.38 5.90 0.00 0.00 84126.56 12905.55 69367.34 00:07:44.481 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.481 Verification LBA range: start 0x80000 length 0x80000 00:07:44.482 Nvme2n1 : 5.09 1508.39 5.89 0.00 0.00 83997.88 9679.16 67754.14 00:07:44.482 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x0 length 0x80000 00:07:44.482 Nvme2n2 : 5.09 1509.99 5.90 0.00 0.00 83970.09 12552.66 70577.23 00:07:44.482 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x80000 length 0x80000 00:07:44.482 Nvme2n2 : 5.09 1507.99 5.89 0.00 0.00 83807.21 9931.22 69770.63 00:07:44.482 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x0 length 0x80000 00:07:44.482 Nvme2n3 : 5.09 1509.56 5.90 0.00 0.00 83794.39 12653.49 75013.51 00:07:44.482 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x80000 length 0x80000 00:07:44.482 Nvme2n3 : 5.09 1507.58 5.89 0.00 0.00 83641.35 10284.11 73803.62 00:07:44.482 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x0 length 0x20000 00:07:44.482 Nvme3n1 : 5.09 1509.14 5.90 0.00 0.00 83606.39 8922.98 78643.20 00:07:44.482 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:07:44.482 Verification LBA range: start 0x20000 length 0x20000 00:07:44.482 Nvme3n1 : 
5.10 1507.20 5.89 0.00 0.00 83568.18 8721.33 78643.20 00:07:44.482 [2024-11-05T11:23:43.756Z] =================================================================================================================== 00:07:44.482 [2024-11-05T11:23:43.756Z] Total : 21099.58 82.42 0.00 0.00 84134.35 6654.42 88322.36 00:07:45.854 00:07:45.854 real 0m7.316s 00:07:45.854 user 0m13.234s 00:07:45.854 sys 0m0.198s 00:07:45.854 11:23:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:45.854 11:23:44 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:45.854 ************************************ 00:07:45.854 END TEST bdev_verify 00:07:45.854 ************************************ 00:07:45.854 11:23:45 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:45.854 11:23:45 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:45.854 11:23:45 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:45.854 11:23:45 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:45.854 ************************************ 00:07:45.854 START TEST bdev_verify_big_io 00:07:45.854 ************************************ 00:07:45.854 11:23:45 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:45.854 [2024-11-05 11:23:45.089961] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:45.854 [2024-11-05 11:23:45.090095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61898 ] 00:07:46.112 [2024-11-05 11:23:45.249728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.112 [2024-11-05 11:23:45.349695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.112 [2024-11-05 11:23:45.349711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.045 Running I/O for 5 seconds... 
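As a quick sanity check on these bdevperf tables, the MiB/s column is just IOPS times the I/O size: for the 5-second verify run above, 21452.80 IOPS at 4 KiB per I/O works out to the reported 83.80 MiB/s.

echo 'scale=2; 21452.80 * 4096 / 1048576' | bc    # -> 83.80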
00:07:53.132 2135.00 IOPS, 133.44 MiB/s [2024-11-05T11:23:52.406Z] 3624.00 IOPS, 226.50 MiB/s 00:07:53.132 Latency(us) 00:07:53.132 [2024-11-05T11:23:52.406Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.132 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0xbd0b 00:07:53.132 Nvme0n1 : 5.92 100.24 6.26 0.00 0.00 1188521.54 12855.14 1561571.64 00:07:53.132 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0xbd0b length 0xbd0b 00:07:53.132 Nvme0n1 : 6.24 76.96 4.81 0.00 0.00 1565474.82 16031.11 2284282.49 00:07:53.132 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x4ff8 00:07:53.132 Nvme1n1p1 : 5.92 108.15 6.76 0.00 0.00 1089808.86 116149.96 1322818.95 00:07:53.132 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x4ff8 length 0x4ff8 00:07:53.132 Nvme1n1p1 : 6.00 106.59 6.66 0.00 0.00 1121528.91 98001.53 1129235.69 00:07:53.132 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x4ff7 00:07:53.132 Nvme1n1p2 : 6.01 110.26 6.89 0.00 0.00 1024405.16 92758.65 1090519.04 00:07:53.132 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x4ff7 length 0x4ff7 00:07:53.132 Nvme1n1p2 : 6.12 104.76 6.55 0.00 0.00 1081599.92 164545.77 1077613.49 00:07:53.132 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x8000 00:07:53.132 Nvme2n1 : 6.18 107.60 6.72 0.00 0.00 1007995.08 92355.35 1845493.76 00:07:53.132 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x8000 length 0x8000 00:07:53.132 Nvme2n1 : 6.12 109.29 6.83 0.00 0.00 1023032.56 112923.57 1109877.37 00:07:53.132 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x8000 00:07:53.132 Nvme2n2 : 6.24 114.89 7.18 0.00 0.00 914078.78 57671.68 1871304.86 00:07:53.132 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x8000 length 0x8000 00:07:53.132 Nvme2n2 : 6.18 113.94 7.12 0.00 0.00 955189.78 54041.99 1000180.18 00:07:53.132 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x8000 00:07:53.132 Nvme2n3 : 6.29 129.81 8.11 0.00 0.00 781621.30 9880.81 1910021.51 00:07:53.132 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x8000 length 0x8000 00:07:53.132 Nvme2n3 : 6.24 119.87 7.49 0.00 0.00 882647.53 54041.99 1019538.51 00:07:53.132 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x0 length 0x2000 00:07:53.132 Nvme3n1 : 6.36 174.81 10.93 0.00 0.00 564698.20 652.21 1922927.06 00:07:53.132 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:07:53.132 Verification LBA range: start 0x2000 length 0x2000 00:07:53.132 Nvme3n1 : 6.25 128.08 8.01 0.00 0.00 800744.60 3100.36 1096971.82 00:07:53.132 [2024-11-05T11:23:52.406Z] 
=================================================================================================================== 00:07:53.132 [2024-11-05T11:23:52.406Z] Total : 1605.26 100.33 0.00 0.00 960084.69 652.21 2284282.49 00:07:54.064 00:07:54.064 real 0m8.201s 00:07:54.064 user 0m15.526s 00:07:54.064 sys 0m0.221s 00:07:54.064 11:23:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:54.064 11:23:53 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:54.064 ************************************ 00:07:54.064 END TEST bdev_verify_big_io 00:07:54.064 ************************************ 00:07:54.064 11:23:53 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:54.064 11:23:53 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:54.064 11:23:53 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:54.064 11:23:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:54.064 ************************************ 00:07:54.064 START TEST bdev_write_zeroes 00:07:54.064 ************************************ 00:07:54.064 11:23:53 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:54.064 [2024-11-05 11:23:53.322719] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:54.064 [2024-11-05 11:23:53.322822] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62007 ] 00:07:54.321 [2024-11-05 11:23:53.472829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.321 [2024-11-05 11:23:53.553867] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.886 Running I/O for 1 seconds... 
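Every sub-test in this log is driven through the same run_test wrapper: print a START banner, time the command, and print an END banner, which is where the real/user/sys summaries after each run come from. A minimal sketch of that pattern follows; the real run_test in autotest_common.sh does more bookkeeping (xtrace control, exit-code handling) than this shows.

# Minimal sketch of the run_test-style wrapper behind the START/END banners (assumption: banners and timing only).
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"                          # produces the real/user/sys lines seen above
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
}

run_test_sketch demo_sleep sleep 1     # prints banners around a timed command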
00:07:56.075 64512.00 IOPS, 252.00 MiB/s 00:07:56.075 Latency(us) 00:07:56.075 [2024-11-05T11:23:55.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:56.075 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme0n1 : 1.02 9190.26 35.90 0.00 0.00 13897.01 10132.87 24702.03 00:07:56.075 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme1n1p1 : 1.02 9178.94 35.86 0.00 0.00 13898.62 10183.29 24702.03 00:07:56.075 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme1n1p2 : 1.03 9167.68 35.81 0.00 0.00 13876.25 9880.81 23895.43 00:07:56.075 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme2n1 : 1.03 9157.01 35.77 0.00 0.00 13858.40 10233.70 23189.66 00:07:56.075 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme2n2 : 1.03 9146.66 35.73 0.00 0.00 13848.61 9275.86 23895.43 00:07:56.075 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme2n3 : 1.03 9136.39 35.69 0.00 0.00 13834.92 8519.68 23592.96 00:07:56.075 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:56.075 Nvme3n1 : 1.03 9126.01 35.65 0.00 0.00 13813.25 7208.96 25004.50 00:07:56.075 [2024-11-05T11:23:55.349Z] =================================================================================================================== 00:07:56.075 [2024-11-05T11:23:55.349Z] Total : 64102.95 250.40 0.00 0.00 13861.01 7208.96 25004.50 00:07:56.640 00:07:56.640 real 0m2.615s 00:07:56.640 user 0m2.324s 00:07:56.640 sys 0m0.175s 00:07:56.640 11:23:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:56.640 11:23:55 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:56.640 ************************************ 00:07:56.640 END TEST bdev_write_zeroes 00:07:56.640 ************************************ 00:07:56.898 11:23:55 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.898 11:23:55 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:56.898 11:23:55 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:56.898 11:23:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:56.898 ************************************ 00:07:56.898 START TEST bdev_json_nonenclosed 00:07:56.898 ************************************ 00:07:56.898 11:23:55 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:56.898 [2024-11-05 11:23:56.012002] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:07:56.898 [2024-11-05 11:23:56.012124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62060 ] 00:07:56.898 [2024-11-05 11:23:56.172436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.156 [2024-11-05 11:23:56.271349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.156 [2024-11-05 11:23:56.271430] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:57.156 [2024-11-05 11:23:56.271447] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:57.156 [2024-11-05 11:23:56.271457] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.413 00:07:57.413 real 0m0.505s 00:07:57.413 user 0m0.307s 00:07:57.413 sys 0m0.095s 00:07:57.413 11:23:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.413 ************************************ 00:07:57.413 END TEST bdev_json_nonenclosed 00:07:57.413 11:23:56 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:57.413 ************************************ 00:07:57.413 11:23:56 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.413 11:23:56 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:57.413 11:23:56 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.413 11:23:56 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.413 ************************************ 00:07:57.413 START TEST bdev_json_nonarray 00:07:57.413 ************************************ 00:07:57.413 11:23:56 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:57.413 [2024-11-05 11:23:56.580425] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:07:57.413 [2024-11-05 11:23:56.580543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62080 ] 00:07:57.671 [2024-11-05 11:23:56.738648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.671 [2024-11-05 11:23:56.838123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.671 [2024-11-05 11:23:56.838208] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
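Both JSON negative tests above feed bdevperf a deliberately malformed config and expect json_config to reject it. Read together, the two error messages pin down the shape a valid config must have: a top-level JSON object whose "subsystems" member is an array. A minimal skeleton of that shape is sketched below; the per-subsystem fields go beyond what the error messages confirm and are shown only as an assumed example, not a complete working config.

# Skeleton of the config shape the negative tests probe (structure only; contents are an assumption).
cat > /tmp/minimal_bdev_config.json <<'EOF'
{
  "subsystems": [
    { "subsystem": "bdev", "config": [] }
  ]
}
EOF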
00:07:57.672 [2024-11-05 11:23:56.838225] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:57.672 [2024-11-05 11:23:56.838234] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:57.931 00:07:57.931 real 0m0.511s 00:07:57.931 user 0m0.324s 00:07:57.931 sys 0m0.083s 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.931 ************************************ 00:07:57.931 END TEST bdev_json_nonarray 00:07:57.931 ************************************ 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:57.931 11:23:57 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:57.931 11:23:57 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:57.931 11:23:57 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:57.931 11:23:57 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.931 11:23:57 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.931 11:23:57 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:57.931 ************************************ 00:07:57.931 START TEST bdev_gpt_uuid 00:07:57.931 ************************************ 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62111 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62111 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 62111 ']' 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:57.931 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:57.931 [2024-11-05 11:23:57.149253] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
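The gpt_uuid run that follows looks up each GPT partition bdev by its partition GUID and checks that the bdev alias and the GPT metadata agree. The queries below condense what the trace does with rpc.py and jq; the GUID is the SPDK_TEST_first value seen in the output, and rpc.py is assumed to target the default spdk_tgt socket as in the trace.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
guid=6f89f330-603b-4116-ac73-2ca8eae53030         # unique partition GUID from the trace

bdev_json=$("$rpc" bdev_get_bdevs -b "$guid")     # look the partition bdev up by its GUID alias
jq -r 'length' <<<"$bdev_json"                                             # expect 1
jq -r '.[0].aliases[0]' <<<"$bdev_json"                                    # expect the GUID
jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev_json"     # expect the GUID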
00:07:57.931 [2024-11-05 11:23:57.149349] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62111 ] 00:07:58.189 [2024-11-05 11:23:57.300390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.189 [2024-11-05 11:23:57.399719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.754 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:58.754 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:07:58.754 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:58.754 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.754 11:23:57 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 Some configs were skipped because the RPC state that can call them passed over. 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.319 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:59.319 { 00:07:59.319 "name": "Nvme1n1p1", 00:07:59.319 "aliases": [ 00:07:59.319 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:59.319 ], 00:07:59.319 "product_name": "GPT Disk", 00:07:59.319 "block_size": 4096, 00:07:59.319 "num_blocks": 655104, 00:07:59.319 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:59.319 "assigned_rate_limits": { 00:07:59.319 "rw_ios_per_sec": 0, 00:07:59.319 "rw_mbytes_per_sec": 0, 00:07:59.319 "r_mbytes_per_sec": 0, 00:07:59.319 "w_mbytes_per_sec": 0 00:07:59.319 }, 00:07:59.319 "claimed": false, 00:07:59.319 "zoned": false, 00:07:59.319 "supported_io_types": { 00:07:59.319 "read": true, 00:07:59.319 "write": true, 00:07:59.319 "unmap": true, 00:07:59.319 "flush": true, 00:07:59.319 "reset": true, 00:07:59.319 "nvme_admin": false, 00:07:59.320 "nvme_io": false, 00:07:59.320 "nvme_io_md": false, 00:07:59.320 "write_zeroes": true, 00:07:59.320 "zcopy": false, 00:07:59.320 "get_zone_info": false, 00:07:59.320 "zone_management": false, 00:07:59.320 "zone_append": false, 00:07:59.320 "compare": true, 00:07:59.320 "compare_and_write": false, 00:07:59.320 "abort": true, 00:07:59.320 "seek_hole": false, 00:07:59.320 "seek_data": false, 00:07:59.320 "copy": true, 00:07:59.320 "nvme_iov_md": false 00:07:59.320 }, 00:07:59.320 "driver_specific": { 
00:07:59.320 "gpt": { 00:07:59.320 "base_bdev": "Nvme1n1", 00:07:59.320 "offset_blocks": 256, 00:07:59.320 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:59.320 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:59.320 "partition_name": "SPDK_TEST_first" 00:07:59.320 } 00:07:59.320 } 00:07:59.320 } 00:07:59.320 ]' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:59.320 { 00:07:59.320 "name": "Nvme1n1p2", 00:07:59.320 "aliases": [ 00:07:59.320 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:59.320 ], 00:07:59.320 "product_name": "GPT Disk", 00:07:59.320 "block_size": 4096, 00:07:59.320 "num_blocks": 655103, 00:07:59.320 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:59.320 "assigned_rate_limits": { 00:07:59.320 "rw_ios_per_sec": 0, 00:07:59.320 "rw_mbytes_per_sec": 0, 00:07:59.320 "r_mbytes_per_sec": 0, 00:07:59.320 "w_mbytes_per_sec": 0 00:07:59.320 }, 00:07:59.320 "claimed": false, 00:07:59.320 "zoned": false, 00:07:59.320 "supported_io_types": { 00:07:59.320 "read": true, 00:07:59.320 "write": true, 00:07:59.320 "unmap": true, 00:07:59.320 "flush": true, 00:07:59.320 "reset": true, 00:07:59.320 "nvme_admin": false, 00:07:59.320 "nvme_io": false, 00:07:59.320 "nvme_io_md": false, 00:07:59.320 "write_zeroes": true, 00:07:59.320 "zcopy": false, 00:07:59.320 "get_zone_info": false, 00:07:59.320 "zone_management": false, 00:07:59.320 "zone_append": false, 00:07:59.320 "compare": true, 00:07:59.320 "compare_and_write": false, 00:07:59.320 "abort": true, 00:07:59.320 "seek_hole": false, 00:07:59.320 "seek_data": false, 00:07:59.320 "copy": true, 00:07:59.320 "nvme_iov_md": false 00:07:59.320 }, 00:07:59.320 "driver_specific": { 00:07:59.320 "gpt": { 00:07:59.320 "base_bdev": "Nvme1n1", 00:07:59.320 "offset_blocks": 655360, 00:07:59.320 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:59.320 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:59.320 "partition_name": "SPDK_TEST_second" 00:07:59.320 } 00:07:59.320 } 00:07:59.320 } 00:07:59.320 ]' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62111 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 62111 ']' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 62111 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62111 00:07:59.320 killing process with pid 62111 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62111' 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 62111 00:07:59.320 11:23:58 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 62111 00:08:01.217 ************************************ 00:08:01.217 END TEST bdev_gpt_uuid 00:08:01.217 ************************************ 00:08:01.217 00:08:01.217 real 0m2.990s 00:08:01.217 user 0m3.147s 00:08:01.217 sys 0m0.353s 00:08:01.217 11:24:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:01.217 11:24:00 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:08:01.217 11:24:00 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:01.217 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.475 Waiting for block devices as requested 00:08:01.475 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.475 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:08:01.475 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.733 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:06.997 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:06.997 11:24:05 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:06.997 11:24:05 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:06.997 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:06.997 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:06.997 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:06.997 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:06.997 11:24:06 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:06.997 00:08:06.997 real 0m54.349s 00:08:06.997 user 1m9.120s 00:08:06.997 sys 0m7.282s 00:08:06.997 11:24:06 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:06.997 ************************************ 00:08:06.997 END TEST blockdev_nvme_gpt 00:08:06.997 ************************************ 00:08:06.997 11:24:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:06.997 11:24:06 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:06.997 11:24:06 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:06.997 11:24:06 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:06.997 11:24:06 -- common/autotest_common.sh@10 -- # set +x 00:08:06.997 ************************************ 00:08:06.997 START TEST nvme 00:08:06.997 ************************************ 00:08:06.997 11:24:06 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:07.255 * Looking for test storage... 00:08:07.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:07.255 11:24:06 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:07.255 11:24:06 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:08:07.255 11:24:06 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:07.255 11:24:06 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:07.255 11:24:06 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:07.255 11:24:06 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:07.255 11:24:06 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:07.255 11:24:06 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:07.255 11:24:06 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:07.255 11:24:06 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:07.255 11:24:06 nvme -- scripts/common.sh@345 -- # : 1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:07.255 11:24:06 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:07.255 11:24:06 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@353 -- # local d=1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:07.255 11:24:06 nvme -- scripts/common.sh@355 -- # echo 1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:07.255 11:24:06 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@353 -- # local d=2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:07.255 11:24:06 nvme -- scripts/common.sh@355 -- # echo 2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:07.255 11:24:06 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:07.256 11:24:06 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:07.256 11:24:06 nvme -- scripts/common.sh@368 -- # return 0 00:08:07.256 11:24:06 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:07.256 11:24:06 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.256 --rc genhtml_branch_coverage=1 00:08:07.256 --rc genhtml_function_coverage=1 00:08:07.256 --rc genhtml_legend=1 00:08:07.256 --rc geninfo_all_blocks=1 00:08:07.256 --rc geninfo_unexecuted_blocks=1 00:08:07.256 00:08:07.256 ' 00:08:07.256 11:24:06 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.256 --rc genhtml_branch_coverage=1 00:08:07.256 --rc genhtml_function_coverage=1 00:08:07.256 --rc genhtml_legend=1 00:08:07.256 --rc geninfo_all_blocks=1 00:08:07.256 --rc geninfo_unexecuted_blocks=1 00:08:07.256 00:08:07.256 ' 00:08:07.256 11:24:06 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.256 --rc genhtml_branch_coverage=1 00:08:07.256 --rc genhtml_function_coverage=1 00:08:07.256 --rc genhtml_legend=1 00:08:07.256 --rc geninfo_all_blocks=1 00:08:07.256 --rc geninfo_unexecuted_blocks=1 00:08:07.256 00:08:07.256 ' 00:08:07.256 11:24:06 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:07.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:07.256 --rc genhtml_branch_coverage=1 00:08:07.256 --rc genhtml_function_coverage=1 00:08:07.256 --rc genhtml_legend=1 00:08:07.256 --rc geninfo_all_blocks=1 00:08:07.256 --rc geninfo_unexecuted_blocks=1 00:08:07.256 00:08:07.256 ' 00:08:07.256 11:24:06 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:07.822 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:08.080 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.080 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.338 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.338 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:08.338 11:24:07 nvme -- nvme/nvme.sh@79 -- # uname 00:08:08.338 Waiting for stub to ready for secondary processes... 
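The trace below starts the SPDK stub application as the DPDK primary process and then polls until it is ready before any secondary test binaries attach. A minimal sketch of that hand-off, assuming the /home/vagrant/spdk_repo layout used in this run (an illustrative reconstruction of the autotest_common.sh helpers, not the verbatim source):

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed location, matching this run
  # Launch the stub as the DPDK primary process: 4096 MB of hugepage memory,
  # shared-memory id 0, core mask 0xE (cores 1-3).
  "$SPDK_DIR/test/app/stub/stub" -s 4096 -i 0 -m 0xE &
  stubpid=$!
  echo "Waiting for stub to ready for secondary processes..."
  # Secondary processes may attach only once the stub has created /var/run/spdk_stub0.
  while [[ ! -e /var/run/spdk_stub0 ]] && [[ -e /proc/$stubpid ]]; do
      sleep 1s
  done
  [[ -e /var/run/spdk_stub0 ]] || { echo "stub exited before becoming ready" >&2; exit 1; }
  echo done.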
00:08:08.338 11:24:07 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:08.338 11:24:07 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:08.338 11:24:07 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1073 -- # stubpid=62745 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/62745 ]] 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:08:08.338 11:24:07 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:08.338 [2024-11-05 11:24:07.479210] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:08:08.338 [2024-11-05 11:24:07.479458] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:09.272 [2024-11-05 11:24:08.239133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.272 [2024-11-05 11:24:08.336309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:09.272 [2024-11-05 11:24:08.337061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:09.272 [2024-11-05 11:24:08.337152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.272 [2024-11-05 11:24:08.357881] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:09.272 [2024-11-05 11:24:08.358369] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:09.272 [2024-11-05 11:24:08.372778] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:09.272 [2024-11-05 11:24:08.373166] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:09.272 [2024-11-05 11:24:08.376853] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:09.272 [2024-11-05 11:24:08.377216] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:09.272 [2024-11-05 11:24:08.377489] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:09.272 [2024-11-05 11:24:08.380991] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:09.272 [2024-11-05 11:24:08.381462] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:09.272 [2024-11-05 11:24:08.381714] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:09.272 [2024-11-05 11:24:08.384196] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:09.272 [2024-11-05 11:24:08.384429] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 
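With the stub and its CUSE devices up, the identify step traced a little further below collects the controller PCI addresses from gen_nvme.sh and runs the identify example as a secondary process. A simplified sketch of that flow under the same path assumptions (not the exact nvme.sh source):

  #!/usr/bin/env bash
  set -euo pipefail
  SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}   # assumed location, matching this run
  # Collect the PCI addresses (BDFs) of the attached NVMe controllers from gen_nvme.sh output.
  mapfile -t bdfs < <("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"
  # Run the identify example with shared-memory id 0 so it attaches to the running stub
  # (the DPDK primary process) instead of re-initializing the controllers itself.
  "$SPDK_DIR/build/bin/spdk_nvme_identify" -i 0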
00:08:09.272 [2024-11-05 11:24:08.384544] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:09.272 [2024-11-05 11:24:08.384595] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:09.272 [2024-11-05 11:24:08.384668] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:09.272 11:24:08 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:09.272 11:24:08 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:08:09.272 done. 00:08:09.272 11:24:08 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:09.272 11:24:08 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:08:09.272 11:24:08 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.272 11:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.272 ************************************ 00:08:09.272 START TEST nvme_reset 00:08:09.272 ************************************ 00:08:09.272 11:24:08 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:09.530 Initializing NVMe Controllers 00:08:09.530 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:09.530 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:09.530 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:09.530 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:09.530 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:09.530 00:08:09.530 real 0m0.222s 00:08:09.530 user 0m0.073s 00:08:09.530 sys 0m0.101s 00:08:09.530 ************************************ 00:08:09.530 END TEST nvme_reset 00:08:09.530 ************************************ 00:08:09.530 11:24:08 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:09.530 11:24:08 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:09.530 11:24:08 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:09.530 11:24:08 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:09.530 11:24:08 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:09.530 11:24:08 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:09.530 ************************************ 00:08:09.530 START TEST nvme_identify 00:08:09.530 ************************************ 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:08:09.530 11:24:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:09.530 11:24:08 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:09.530 11:24:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:09.530 11:24:08 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:09.530 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:09.791 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # 
(( 4 == 0 )) 00:08:09.791 11:24:08 nvme.nvme_identify -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:09.791 11:24:08 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:09.792 [2024-11-05 11:24:08.992171] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 62766 terminated unexpected ===================================================== 00:08:09.792 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:09.792 ===================================================== 00:08:09.792 Controller Capabilities/Features 00:08:09.792 ================================ 00:08:09.792 Vendor ID: 1b36 00:08:09.792 Subsystem Vendor ID: 1af4 00:08:09.792 Serial Number: 12343 00:08:09.792 Model Number: QEMU NVMe Ctrl 00:08:09.792 Firmware Version: 8.0.0 00:08:09.792 Recommended Arb Burst: 6 00:08:09.792 IEEE OUI Identifier: 00 54 52 00:08:09.792 Multi-path I/O 00:08:09.792 May have multiple subsystem ports: No 00:08:09.792 May have multiple controllers: Yes 00:08:09.792 Associated with SR-IOV VF: No 00:08:09.792 Max Data Transfer Size: 524288 00:08:09.792 Max Number of Namespaces: 256 00:08:09.792 Max Number of I/O Queues: 64 00:08:09.792 NVMe Specification Version (VS): 1.4 00:08:09.792 NVMe Specification Version (Identify): 1.4 00:08:09.792 Maximum Queue Entries: 2048 00:08:09.792 Contiguous Queues Required: Yes 00:08:09.792 Arbitration Mechanisms Supported 00:08:09.792 Weighted Round Robin: Not Supported 00:08:09.792 Vendor Specific: Not Supported 00:08:09.792 Reset Timeout: 7500 ms 00:08:09.792 Doorbell Stride: 4 bytes 00:08:09.792 NVM Subsystem Reset: Not Supported 00:08:09.792 Command Sets Supported 00:08:09.792 NVM Command Set: Supported 00:08:09.792 Boot Partition: Not Supported 00:08:09.792 Memory Page Size Minimum: 4096 bytes 00:08:09.792 Memory Page Size Maximum: 65536 bytes 00:08:09.792 Persistent Memory Region: Not Supported 00:08:09.792 Optional Asynchronous Events Supported 00:08:09.792 Namespace Attribute Notices: Supported 00:08:09.792 Firmware Activation Notices: Not Supported 00:08:09.792 ANA Change Notices: Not Supported 00:08:09.792 PLE Aggregate Log Change Notices: Not Supported 00:08:09.792 LBA Status Info Alert Notices: Not Supported 00:08:09.792 EGE Aggregate Log Change Notices: Not Supported 00:08:09.792 Normal NVM Subsystem Shutdown event: Not Supported 00:08:09.792 Zone Descriptor Change Notices: Not Supported 00:08:09.792 Discovery Log Change Notices: Not Supported 00:08:09.792 Controller Attributes 00:08:09.792 128-bit Host Identifier: Not Supported 00:08:09.792 Non-Operational Permissive Mode: Not Supported 00:08:09.792 NVM Sets: Not Supported 00:08:09.792 Read Recovery Levels: Not Supported 00:08:09.792 Endurance Groups: Supported 00:08:09.792 Predictable Latency Mode: Not Supported 00:08:09.792 Traffic Based Keep ALive: Not Supported 00:08:09.792 Namespace Granularity: Not Supported 00:08:09.792 SQ Associations: Not Supported 00:08:09.792 UUID List: Not Supported 00:08:09.792 Multi-Domain Subsystem: Not Supported 00:08:09.792 Fixed Capacity Management: Not Supported 00:08:09.792 Variable Capacity Management: Not Supported 00:08:09.792 Delete Endurance Group: Not Supported 00:08:09.792 Delete NVM Set: Not Supported 00:08:09.792 Extended LBA Formats Supported: Supported 00:08:09.792 Flexible Data Placement Supported: Supported 00:08:09.792 00:08:09.792 Controller Memory Buffer Support 00:08:09.792 ================================ 00:08:09.792
Supported: No 00:08:09.792 00:08:09.792 Persistent Memory Region Support 00:08:09.792 ================================ 00:08:09.792 Supported: No 00:08:09.792 00:08:09.792 Admin Command Set Attributes 00:08:09.792 ============================ 00:08:09.792 Security Send/Receive: Not Supported 00:08:09.792 Format NVM: Supported 00:08:09.792 Firmware Activate/Download: Not Supported 00:08:09.792 Namespace Management: Supported 00:08:09.792 Device Self-Test: Not Supported 00:08:09.792 Directives: Supported 00:08:09.792 NVMe-MI: Not Supported 00:08:09.792 Virtualization Management: Not Supported 00:08:09.792 Doorbell Buffer Config: Supported 00:08:09.792 Get LBA Status Capability: Not Supported 00:08:09.792 Command & Feature Lockdown Capability: Not Supported 00:08:09.792 Abort Command Limit: 4 00:08:09.792 Async Event Request Limit: 4 00:08:09.792 Number of Firmware Slots: N/A 00:08:09.792 Firmware Slot 1 Read-Only: N/A 00:08:09.792 Firmware Activation Without Reset: N/A 00:08:09.792 Multiple Update Detection Support: N/A 00:08:09.792 Firmware Update Granularity: No Information Provided 00:08:09.792 Per-Namespace SMART Log: Yes 00:08:09.792 Asymmetric Namespace Access Log Page: Not Supported 00:08:09.792 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:09.792 Command Effects Log Page: Supported 00:08:09.792 Get Log Page Extended Data: Supported 00:08:09.792 Telemetry Log Pages: Not Supported 00:08:09.792 Persistent Event Log Pages: Not Supported 00:08:09.792 Supported Log Pages Log Page: May Support 00:08:09.792 Commands Supported & Effects Log Page: Not Supported 00:08:09.792 Feature Identifiers & Effects Log Page:May Support 00:08:09.792 NVMe-MI Commands & Effects Log Page: May Support 00:08:09.792 Data Area 4 for Telemetry Log: Not Supported 00:08:09.792 Error Log Page Entries Supported: 1 00:08:09.792 Keep Alive: Not Supported 00:08:09.792 00:08:09.792 NVM Command Set Attributes 00:08:09.792 ========================== 00:08:09.792 Submission Queue Entry Size 00:08:09.792 Max: 64 00:08:09.792 Min: 64 00:08:09.792 Completion Queue Entry Size 00:08:09.792 Max: 16 00:08:09.792 Min: 16 00:08:09.792 Number of Namespaces: 256 00:08:09.792 Compare Command: Supported 00:08:09.792 Write Uncorrectable Command: Not Supported 00:08:09.792 Dataset Management Command: Supported 00:08:09.792 Write Zeroes Command: Supported 00:08:09.792 Set Features Save Field: Supported 00:08:09.792 Reservations: Not Supported 00:08:09.792 Timestamp: Supported 00:08:09.792 Copy: Supported 00:08:09.792 Volatile Write Cache: Present 00:08:09.792 Atomic Write Unit (Normal): 1 00:08:09.792 Atomic Write Unit (PFail): 1 00:08:09.792 Atomic Compare & Write Unit: 1 00:08:09.792 Fused Compare & Write: Not Supported 00:08:09.792 Scatter-Gather List 00:08:09.792 SGL Command Set: Supported 00:08:09.792 SGL Keyed: Not Supported 00:08:09.792 SGL Bit Bucket Descriptor: Not Supported 00:08:09.792 SGL Metadata Pointer: Not Supported 00:08:09.792 Oversized SGL: Not Supported 00:08:09.792 SGL Metadata Address: Not Supported 00:08:09.792 SGL Offset: Not Supported 00:08:09.792 Transport SGL Data Block: Not Supported 00:08:09.792 Replay Protected Memory Block: Not Supported 00:08:09.792 00:08:09.792 Firmware Slot Information 00:08:09.792 ========================= 00:08:09.792 Active slot: 1 00:08:09.792 Slot 1 Firmware Revision: 1.0 00:08:09.792 00:08:09.792 00:08:09.792 Commands Supported and Effects 00:08:09.792 ============================== 00:08:09.792 Admin Commands 00:08:09.792 -------------- 00:08:09.792 Delete I/O Submission 
Queue (00h): Supported 00:08:09.792 Create I/O Submission Queue (01h): Supported 00:08:09.792 Get Log Page (02h): Supported 00:08:09.792 Delete I/O Completion Queue (04h): Supported 00:08:09.792 Create I/O Completion Queue (05h): Supported 00:08:09.792 Identify (06h): Supported 00:08:09.792 Abort (08h): Supported 00:08:09.792 Set Features (09h): Supported 00:08:09.792 Get Features (0Ah): Supported 00:08:09.792 Asynchronous Event Request (0Ch): Supported 00:08:09.792 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:09.792 Directive Send (19h): Supported 00:08:09.792 Directive Receive (1Ah): Supported 00:08:09.792 Virtualization Management (1Ch): Supported 00:08:09.792 Doorbell Buffer Config (7Ch): Supported 00:08:09.792 Format NVM (80h): Supported LBA-Change 00:08:09.792 I/O Commands 00:08:09.792 ------------ 00:08:09.792 Flush (00h): Supported LBA-Change 00:08:09.792 Write (01h): Supported LBA-Change 00:08:09.792 Read (02h): Supported 00:08:09.792 Compare (05h): Supported 00:08:09.792 Write Zeroes (08h): Supported LBA-Change 00:08:09.792 Dataset Management (09h): Supported LBA-Change 00:08:09.792 Unknown (0Ch): Supported 00:08:09.792 Unknown (12h): Supported 00:08:09.792 Copy (19h): Supported LBA-Change 00:08:09.792 Unknown (1Dh): Supported LBA-Change 00:08:09.792 00:08:09.792 Error Log 00:08:09.792 ========= 00:08:09.792 00:08:09.792 Arbitration 00:08:09.792 =========== 00:08:09.792 Arbitration Burst: no limit 00:08:09.792 00:08:09.792 Power Management 00:08:09.792 ================ 00:08:09.792 Number of Power States: 1 00:08:09.792 Current Power State: Power State #0 00:08:09.792 Power State #0: 00:08:09.792 Max Power: 25.00 W 00:08:09.792 Non-Operational State: Operational 00:08:09.792 Entry Latency: 16 microseconds 00:08:09.792 Exit Latency: 4 microseconds 00:08:09.793 Relative Read Throughput: 0 00:08:09.793 Relative Read Latency: 0 00:08:09.793 Relative Write Throughput: 0 00:08:09.793 Relative Write Latency: 0 00:08:09.793 Idle Power: Not Reported 00:08:09.793 Active Power: Not Reported 00:08:09.793 Non-Operational Permissive Mode: Not Supported 00:08:09.793 00:08:09.793 Health Information 00:08:09.793 ================== 00:08:09.793 Critical Warnings: 00:08:09.793 Available Spare Space: OK 00:08:09.793 Temperature: OK 00:08:09.793 Device Reliability: OK 00:08:09.793 Read Only: No 00:08:09.793 Volatile Memory Backup: OK 00:08:09.793 Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.793 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:09.793 Available Spare: 0% 00:08:09.793 Available Spare Threshold: 0% 00:08:09.793 Life Percentage Used: 0% 00:08:09.793 Data Units Read: 875 00:08:09.793 Data Units Written: 804 00:08:09.793 Host Read Commands: 40576 00:08:09.793 Host Write Commands: 39999 00:08:09.793 Controller Busy Time: 0 minutes 00:08:09.793 Power Cycles: 0 00:08:09.793 Power On Hours: 0 hours 00:08:09.793 Unsafe Shutdowns: 0 00:08:09.793 Unrecoverable Media Errors: 0 00:08:09.793 Lifetime Error Log Entries: 0 00:08:09.793 Warning Temperature Time: 0 minutes 00:08:09.793 Critical Temperature Time: 0 minutes 00:08:09.793 00:08:09.793 Number of Queues 00:08:09.793 ================ 00:08:09.793 Number of I/O Submission Queues: 64 00:08:09.793 Number of I/O Completion Queues: 64 00:08:09.793 00:08:09.793 ZNS Specific Controller Data 00:08:09.793 ============================ 00:08:09.793 Zone Append Size Limit: 0 00:08:09.793 00:08:09.793 00:08:09.793 Active Namespaces 00:08:09.793 ================= 00:08:09.793 Namespace ID:1 00:08:09.793 Error Recovery Timeout: 
Unlimited 00:08:09.793 Command Set Identifier: NVM (00h) 00:08:09.793 Deallocate: Supported 00:08:09.793 Deallocated/Unwritten Error: Supported 00:08:09.793 Deallocated Read Value: All 0x00 00:08:09.793 Deallocate in Write Zeroes: Not Supported 00:08:09.793 Deallocated Guard Field: 0xFFFF 00:08:09.793 Flush: Supported 00:08:09.793 Reservation: Not Supported 00:08:09.793 Namespace Sharing Capabilities: Multiple Controllers 00:08:09.793 Size (in LBAs): 262144 (1GiB) 00:08:09.793 Capacity (in LBAs): 262144 (1GiB) 00:08:09.793 Utilization (in LBAs): 262144 (1GiB) 00:08:09.793 Thin Provisioning: Not Supported 00:08:09.793 Per-NS Atomic Units: No 00:08:09.793 Maximum Single Source Range Length: 128 00:08:09.793 Maximum Copy Length: 128 00:08:09.793 Maximum Source Range Count: 128 00:08:09.793 NGUID/EUI64 Never Reused: No 00:08:09.793 Namespace Write Protected: No 00:08:09.793 Endurance group ID: 1 00:08:09.793 Number of LBA Formats: 8 00:08:09.793 Current LBA Format: LBA Format #04 00:08:09.793 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.793 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.793 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.793 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.793 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.793 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.793 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.793 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.793 00:08:09.793 Get Feature FDP: 00:08:09.793 ================ 00:08:09.793 Enabled: Yes 00:08:09.793 FDP configuration index: 0 00:08:09.793 00:08:09.793 FDP configurations log page 00:08:09.793 =========================== 00:08:09.793 Number of FDP configurations: 1 00:08:09.793 Version: 0 00:08:09.793 Size: 112 00:08:09.793 FDP Configuration Descriptor: 0 00:08:09.793 Descriptor Size: 96 00:08:09.793 Reclaim Group Identifier format: 2 00:08:09.793 FDP Volatile Write Cache: Not Present 00:08:09.793 FDP Configuration: Valid 00:08:09.793 Vendor Specific Size: 0 00:08:09.793 Number of Reclaim Groups: 2 00:08:09.793 Number of Recalim Unit Handles: 8 00:08:09.793 Max Placement Identifiers: 128 00:08:09.793 Number of Namespaces Suppprted: 256 00:08:09.793 Reclaim unit Nominal Size: 6000000 bytes 00:08:09.793 Estimated Reclaim Unit Time Limit: Not Reported 00:08:09.793 RUH Desc #000: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #001: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #002: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #003: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #004: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #005: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #006: RUH Type: Initially Isolated 00:08:09.793 RUH Desc #007: RUH Type: Initially Isolated 00:08:09.793 00:08:09.793 FDP reclaim unit handle usage log page 00:08:09.793 ====================================== 00:08:09.793 Number of Reclaim Unit Handles: 8 00:08:09.793 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:09.793 RUH Usage Desc #001: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #002: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #003: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #004: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #005: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #006: RUH Attributes: Unused 00:08:09.793 RUH Usage Desc #007: RUH Attributes: Unused 00:08:09.793 00:08:09.793 FDP statistics log page 00:08:09.793 ======================= 00:08:09.793 
Host bytes with metadata written: 517513216 00:08:09.793 Media bytes with metadata written: 517570560 00:08:09.793 Media bytes erased: 0 00:08:09.793 00:08:09.793 FDP events log page 00:08:09.793 =================== 00:08:09.793 Number of FDP events: 0 00:08:09.793 00:08:09.793 NVM Specific Namespace Data 00:08:09.793 =========================== 00:08:09.793 Logical Block Storage Tag Mask: 0 00:08:09.793 Protection Information Capabilities: 00:08:09.793 16b Guard Protection Information Storage Tag Support: No 00:08:09.793 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.793 Storage Tag Check Read Support: No 00:08:09.793 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.793 ===================================================== 00:08:09.793 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:09.793 ===================================================== 00:08:09.793 Controller Capabilities/Features 00:08:09.793 ================================ 00:08:09.793 Vendor ID: 1b36 00:08:09.793 Subsystem Vendor ID: 1af4 00:08:09.793 Serial Number: 12340 00:08:09.793 Model Number: QEMU NVMe Ctrl 00:08:09.793 Firmware Version: 8.0.0 00:08:09.793 Recommended Arb Burst: 6 00:08:09.793 IEEE OUI Identifier: 00 54 52 00:08:09.793 Multi-path I/O 00:08:09.793 May have multiple subsystem ports: No 00:08:09.793 May have multiple controllers: No 00:08:09.793 Associated with SR-IOV VF: No 00:08:09.793 Max Data Transfer Size: 524288 00:08:09.793 Max Number of Namespaces: 256 00:08:09.793 Max Number of I/O Queues: 64 00:08:09.793 NVMe Specification Version (VS): 1.4 00:08:09.793 NVMe Specification Version (Identify): 1.4 00:08:09.793 Maximum Queue Entries: 2048 00:08:09.793 Contiguous Queues Required: Yes 00:08:09.793 Arbitration Mechanisms Supported 00:08:09.793 Weighted Round Robin: Not Supported 00:08:09.793 Vendor Specific: Not Supported 00:08:09.793 Reset Timeout: 7500 ms 00:08:09.793 Doorbell Stride: 4 bytes 00:08:09.793 NVM Subsystem Reset: Not Supported 00:08:09.793 Command Sets Supported 00:08:09.793 NVM Command Set: Supported 00:08:09.793 Boot Partition: Not Supported 00:08:09.793 Memory Page Size Minimum: 4096 bytes 00:08:09.793 Memory Page Size Maximum: 65536 bytes 00:08:09.793 Persistent Memory Region: Not Supported 00:08:09.793 Optional Asynchronous Events Supported 00:08:09.793 Namespace Attribute Notices: Supported 00:08:09.793 Firmware Activation Notices: Not Supported 00:08:09.793 ANA Change Notices: Not Supported 00:08:09.793 PLE Aggregate Log Change Notices: Not Supported 00:08:09.793 LBA Status Info Alert Notices: Not Supported 00:08:09.793 EGE Aggregate Log Change Notices: Not Supported 00:08:09.793 Normal NVM Subsystem Shutdown event: Not Supported 00:08:09.793 Zone Descriptor Change Notices: Not 
Supported 00:08:09.793 Discovery Log Change Notices: Not Supported 00:08:09.793 Controller Attributes 00:08:09.793 128-bit Host Identifier: Not Supported 00:08:09.793 Non-Operational Permissive Mode: Not Supported 00:08:09.793 NVM Sets: Not Supported 00:08:09.794 Read Recovery Levels: Not Supported 00:08:09.794 Endurance Groups: Not Supported 00:08:09.794 Predictable Latency Mode: Not Supported 00:08:09.794 Traffic Based Keep ALive: Not Supported 00:08:09.794 Namespace Granularity: Not Supported 00:08:09.794 SQ Associations: Not Supported 00:08:09.794 UUID List: Not Supported 00:08:09.794 Multi-Domain Subsystem: Not Supported 00:08:09.794 Fixed Capacity Management: Not Supported 00:08:09.794 Variable Capacity Management: Not Supported 00:08:09.794 Delete Endurance Group: Not Supported 00:08:09.794 Delete NVM Set: Not Supported 00:08:09.794 Extended LBA Formats Supported: Supported 00:08:09.794 Flexible Data Placement Supported: Not Supported 00:08:09.794 00:08:09.794 Controller Memory Buffer Support 00:08:09.794 ================================ 00:08:09.794 Supported: No 00:08:09.794 00:08:09.794 Persistent Memory Region Support 00:08:09.794 ================================ 00:08:09.794 Supported: No 00:08:09.794 00:08:09.794 Admin Command Set Attributes 00:08:09.794 ============================ 00:08:09.794 Security Send/Receive: Not Supported 00:08:09.794 Format NVM: Supported 00:08:09.794 Firmware Activate/Download: Not Supported 00:08:09.794 Namespace Management: Supported 00:08:09.794 Device Self-Test: Not Supported 00:08:09.794 Directives: Supported 00:08:09.794 NVMe-MI: Not Supported 00:08:09.794 Virtualization Management: Not Supported 00:08:09.794 Doorbell Buffer Config: Supported 00:08:09.794 Get LBA Status Capability: Not Supported 00:08:09.794 Command & Feature Lockdown Capability: Not Supported 00:08:09.794 Abort Command Limit: 4 00:08:09.794 Async Event Request Limit: 4 00:08:09.794 Number of Firmware Slots: N/A 00:08:09.794 Firmware Slot 1 Read-Only: N/A 00:08:09.794 Firmware Activation Without Reset: N/A 00:08:09.794 Multiple Update Detection Support: N/A 00:08:09.794 Firmware Update Granularity: No Information Provided 00:08:09.794 Per-Namespace SMART Log: Yes 00:08:09.794 Asymmetric Namespace Access Log Page: Not Supported 00:08:09.794 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:09.794 Command Effects Log Page: Supported 00:08:09.794 Get Log Page Extended Data: Supported 00:08:09.794 Telemetry Log Pages: Not Supported 00:08:09.794 Persistent Event Log Pages: Not Supported 00:08:09.794 Supported Log Pages Log Page: May Support 00:08:09.794 Commands Supported & Effects Log Page: Not Supported 00:08:09.794 Feature Identifiers & Effects Log Page:May Support 00:08:09.794 NVMe-MI Commands & Effects Log Page: May Support 00:08:09.794 Data Area 4 for Telemetry Log: Not Supported 00:08:09.794 Error Log Page Entries Supported: 1 00:08:09.794 Keep Alive: Not Supported 00:08:09.794 00:08:09.794 NVM Command Set Attributes 00:08:09.794 ========================== 00:08:09.794 Submission Queue Entry Size 00:08:09.794 Max: 64 00:08:09.794 Min: 64 00:08:09.794 Completion Queue Entry Size 00:08:09.794 Max: 16 00:08:09.794 Min: 16 00:08:09.794 Number of Namespaces: 256 00:08:09.794 Compare Command: Supported 00:08:09.794 Write Uncorrectable Command: Not Supported 00:08:09.794 Dataset Management Command: Supported 00:08:09.794 Write Zeroes Command: Supported 00:08:09.794 Set Features Save Field: Supported 00:08:09.794 Reservations: Not Supported 00:08:09.794 Timestamp: Supported 
00:08:09.794 Copy: Supported 00:08:09.794 Volatile Write Cache: Present 00:08:09.794 Atomic Write Unit (Normal): 1 00:08:09.794 Atomic Write Unit (PFail): 1 00:08:09.794 Atomic Compare & Write Unit: 1 00:08:09.794 Fused Compare & Write: Not Supported 00:08:09.794 Scatter-Gather List 00:08:09.794 SGL Command Set: Supported 00:08:09.794 SGL Keyed: Not Supported 00:08:09.794 SGL Bit Bucket Descriptor: Not Supported 00:08:09.794 SGL Metadata Pointer: Not Supported 00:08:09.794 Oversized SGL: Not Supported 00:08:09.794 SGL Metadata Address: Not Supported 00:08:09.794 SGL Offset: Not Supported 00:08:09.794 Transport SGL Data Block: Not Supported 00:08:09.794 Replay Protected Memory Block: Not Supported 00:08:09.794 00:08:09.794 Firmware Slot Information 00:08:09.794 ========================= 00:08:09.794 Active slot: 1 00:08:09.794 Slot 1 Firmware Revision: 1.0 00:08:09.794 00:08:09.794 00:08:09.794 Commands Supported and Effects 00:08:09.794 ============================== 00:08:09.794 Admin Commands 00:08:09.794 -------------- 00:08:09.794 Delete I/O Submission Queue (00h): Supported 00:08:09.794 Create I/O Submission Queue (01h): Supported 00:08:09.794 Get Log Page (02h): Supported 00:08:09.794 Delete I/O Completion Queue (04h): Supported 00:08:09.794 Create I/O Completion Queue (05h): Supported 00:08:09.794 Identify (06h): Supported 00:08:09.794 Abort (08h): Supported 00:08:09.794 Set Features (09h): Supported 00:08:09.794 Get Features (0Ah): Supported 00:08:09.794 Asynchronous Event Request (0Ch): Supported 00:08:09.794 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:09.794 Directive Send (19h): Supported 00:08:09.794 Directive Receive (1Ah): Supported 00:08:09.794 Virtualization Management (1Ch): Supported 00:08:09.794 Doorbell Buffer Config (7Ch): Supported 00:08:09.794 Format NVM (80h): Supported LBA-Change 00:08:09.794 I/O Commands 00:08:09.794 ------------ 00:08:09.794 Flush (00h): Supported LBA-Change 00:08:09.794 Write (01h): Supported LBA-Change 00:08:09.794 Read (02h): Supported 00:08:09.794 Compare (05h): Supported 00:08:09.794 Write Zeroes (08h): Supported LBA-Change 00:08:09.794 Dataset Management (09h): Supported LBA-Change 00:08:09.794 Unknown (0Ch): Supported 00:08:09.794 Unknown (12h): Supported 00:08:09.794 Copy (19h): Supported LBA-Change 00:08:09.794 Unknown (1Dh): Supported LBA-Change 00:08:09.794 00:08:09.794 Error Log 00:08:09.794 ========= 00:08:09.794 00:08:09.794 Arbitration 00:08:09.794 =========== 00:08:09.794 Arbitration Burst: no limit 00:08:09.794 00:08:09.794 Power Management 00:08:09.794 ================ 00:08:09.794 Number of Power States: 1 00:08:09.794 Current Power State: Power State #0 00:08:09.794 Power State #0: 00:08:09.794 Max Power: 25.00 W 00:08:09.794 Non-Operational State: Operational 00:08:09.794 Entry Latency: 16 microseconds 00:08:09.794 Exit Latency: 4 microseconds 00:08:09.794 Relative Read Throughput: 0 00:08:09.794 Relative Read Latency: 0 00:08:09.794 Relative Write Throughput: 0 00:08:09.794 Relative Write Latency: 0 00:08:09.794 Idle Power: Not Reported 00:08:09.794 Active Power: Not Reported 00:08:09.794 Non-Operational Permissive Mode: Not Supported 00:08:09.794 00:08:09.794 Health Information 00:08:09.794 ================== 00:08:09.794 Critical Warnings: 00:08:09.794 Available Spare Space: OK 00:08:09.794 Temperature: OK 00:08:09.794 Device Reliability: OK 00:08:09.794 Read Only: No 00:08:09.794 Volatile Memory Backup: OK 00:08:09.794 Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.794 Temperature Threshold: 343 
Kelvin (70 Celsius) 00:08:09.794 Available Spare: 0% 00:08:09.794 Available Spare Threshold: 0% 00:08:09.794 Life Percentage Used: 0% 00:08:09.794 Data Units Read: 691 00:08:09.794 Data Units Written: 619 00:08:09.794 Host Read Commands: 38654 00:08:09.794 Host Write Commands: 38440 00:08:09.794 Controller Busy Time: 0 minutes 00:08:09.794 Power Cycles: 0 00:08:09.794 Power On Hours: 0 hours 00:08:09.794 Unsafe Shutdowns: 0 00:08:09.794 Unrecoverable Media Errors: 0 00:08:09.794 Lifetime Error Log Entries: 0 00:08:09.794 Warning Temperature Time: 0 minutes 00:08:09.794 Critical Temperature Time: 0 minutes 00:08:09.794 00:08:09.794 Number of Queues 00:08:09.794 ================ 00:08:09.794 Number of I/O Submission Queues: 64 00:08:09.794 Number of I/O Completion Queues: 64 00:08:09.794 00:08:09.794 ZNS Specific Controller Data 00:08:09.794 ============================ 00:08:09.794 Zone Append Size Limit: 0 00:08:09.794 00:08:09.794 00:08:09.794 Active Namespaces 00:08:09.794 ================= 00:08:09.794 Namespace ID:1 00:08:09.794 Error Recovery Timeout: Unlimited 00:08:09.794 Command Set Identifier: NVM (00h) 00:08:09.794 Deallocate: Supported 00:08:09.794 Deallocated/Unwritten Error: Supported 00:08:09.794 Deallocated Read Value: All 0x00 00:08:09.794 Deallocate in Write Zeroes: Not Supported 00:08:09.794 Deallocated Guard Field: 0xFFFF 00:08:09.794 Flush: Supported 00:08:09.794 Reservation: Not Supported 00:08:09.794 Metadata Transferred as: Separate Metadata Buffer 00:08:09.794 Namespace Sharing Capabilities: Private 00:08:09.794 Size (in LBAs): 1548666 (5GiB) 00:08:09.794 Capacity (in LBAs): 1548666 (5GiB) 00:08:09.794 Utilization (in LBAs): 1548666 (5GiB) 00:08:09.794 Thin Provisioning: Not Supported 00:08:09.794 Per-NS Atomic Units: No 00:08:09.794 Maximum Single Source Range Length: 128 00:08:09.794 Maximum Copy Length: 128 00:08:09.794 Maximum Source Range Count: 128 00:08:09.795 NGUID/EUI64 Never Reused: No 00:08:09.795 Namespace Write Protected: No 00:08:09.795 Number of LBA Formats: 8 00:08:09.795 Current LBA Format: LBA Format #07 00:08:09.795 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.795 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.795 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.795 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.795 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.795 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.795 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.795 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.795 00:08:09.795 NVM Specific Namespace Data 00:08:09.795 =========================== 00:08:09.795 Logical Block Storage Tag Mask: 0 00:08:09.795 Protection Information Capabilities: 00:08:09.795 16b Guard Protection Information Storage Tag Support: No 00:08:09.795 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.795 Storage Tag Check Read Support: No 00:08:09.795 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #05: Storage Tag Size: 
0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.795 ===================================================== 00:08:09.795 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:09.795 ===================================================== 00:08:09.795 Controller Capabilities/Features 00:08:09.795 ================================ 00:08:09.795 Vendor ID: 1b36 00:08:09.795 Subsystem Vendor ID: 1af4 00:08:09.795 Serial Number: 12341 00:08:09.795 Model Number: QEMU NVMe Ctrl 00:08:09.795 Firmware Version: 8.0.0 00:08:09.795 Recommended Arb Burst: 6 00:08:09.795 IEEE OUI Identifier: 00 54 52 00:08:09.795 Multi-path I/O 00:08:09.795 May have multiple subsystem ports: No 00:08:09.795 May have multiple controllers: No 00:08:09.795 Associated with SR-IOV VF: No 00:08:09.795 Max Data Transfer Size: 524288 00:08:09.795 Max Number of Namespaces: 256 00:08:09.795 Max Number of I/O Queues: 64 00:08:09.795 NVMe Specification Version (VS): 1.4 00:08:09.795 NVMe Specification Version (Identify): 1.4 00:08:09.795 Maximum Queue Entries: 2048 00:08:09.795 Contiguous Queues Required: Yes 00:08:09.795 Arbitration Mechanisms Supported 00:08:09.795 Weighted Round Robin: Not Supported 00:08:09.795 Vendor Specific: Not Supported 00:08:09.795 Reset Timeout: 7500 ms 00:08:09.795 Doorbell Stride: 4 bytes 00:08:09.795 NVM Subsystem Reset: Not Supported 00:08:09.795 Command Sets Supported 00:08:09.795 NVM Command Set: Supported 00:08:09.795 Boot Partition: Not Supported 00:08:09.795 Memory Page Size Minimum: 4096 bytes 00:08:09.795 Memory Page Size Maximum: 65536 bytes 00:08:09.795 Persistent Memory Region: Not Supported 00:08:09.795 Optional Asynchronous Events Supported 00:08:09.795 Namespace Attribute Notices: Supported 00:08:09.795 Firmware Activation Notices: Not Supported 00:08:09.795 ANA Change Notices: Not Supported 00:08:09.795 PLE Aggregate Log Change Notices: Not Supported 00:08:09.795 LBA Status Info Alert Notices: Not Supported 00:08:09.795 EGE Aggregate Log Change Notices: Not Supported 00:08:09.795 Normal NVM Subsystem Shutdown event: Not Supported 00:08:09.795 Zone Descriptor Change Notices: Not Supported 00:08:09.795 Discovery Log Change Notices: Not Supported 00:08:09.795 Controller Attributes 00:08:09.795 128-bit Host Identifier: Not Supported 00:08:09.795 Non-Operational Permissive Mode: Not Supported 00:08:09.795 NVM Sets: Not Supported 00:08:09.795 Read Recovery Levels: Not Supported 00:08:09.795 Endurance Groups: Not Supported 00:08:09.795 Predictable Latency Mode: Not Supported 00:08:09.795 Traffic Based Keep ALive: Not Supported 00:08:09.795 Namespace Granularity: Not Supported 00:08:09.795 SQ Associations: Not Supported 00:08:09.795 UUID List: Not Supported 00:08:09.795 Multi-Domain Subsystem: Not Supported 00:08:09.795 Fixed Capacity Management: Not Supported 00:08:09.795 Variable Capacity Management: Not Supported 00:08:09.795 Delete Endurance Group: Not Supported 00:08:09.795 Delete NVM Set: Not Supported 00:08:09.795 Extended LBA Formats Supported: Supported 00:08:09.795 Flexible Data Placement Supported: Not Supported 00:08:09.795 00:08:09.795 Controller Memory Buffer Support 00:08:09.795 ================================ 00:08:09.795 Supported: No 00:08:09.795 00:08:09.795 Persistent Memory Region Support 00:08:09.795 ================================ 00:08:09.795 Supported: No 
00:08:09.795 00:08:09.795 Admin Command Set Attributes 00:08:09.795 ============================ 00:08:09.795 Security Send/Receive: Not Supported 00:08:09.795 Format NVM: Supported 00:08:09.795 Firmware Activate/Download: Not Supported 00:08:09.795 Namespace Management: Supported 00:08:09.795 Device Self-Test: Not Supported 00:08:09.795 Directives: Supported 00:08:09.795 NVMe-MI: Not Supported 00:08:09.795 Virtualization Management: Not Supported 00:08:09.795 Doorbell Buffer Config: Supported 00:08:09.795 Get LBA Status Capability: Not Supported 00:08:09.795 Command & Feature Lockdown Capability: Not Supported 00:08:09.795 Abort Command Limit: 4 00:08:09.795 Async Event Request Limit: 4 00:08:09.795 Number of Firmware Slots: N/A 00:08:09.795 Firmware Slot 1 Read-Only: N/A 00:08:09.795 Firmware Activation Without Reset: N/A 00:08:09.795 Multiple Update Detection Support: N/A 00:08:09.795 Firmware Update Granularity: No Information Provided 00:08:09.795 Per-Namespace SMART Log: Yes 00:08:09.795 Asymmetric Namespace Access Log Page: Not Supported 00:08:09.795 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:09.795 Command Effects Log Page: Supported 00:08:09.795 Get Log Page Extended Data: Supported 00:08:09.795 Telemetry Log Pages: Not Supported 00:08:09.795 Persistent Event Log Pages: Not Supported 00:08:09.795 Supported Log Pages Log Page: May Support 00:08:09.795 Commands Supported & Effects Log Page: Not Supported 00:08:09.795 Feature Identifiers & Effects Log Page:May Support 00:08:09.795 NVMe-MI Commands & Effects Log Page: May Support 00:08:09.795 Data Area 4 for Telemetry Log: Not Supported 00:08:09.795 Error Log Page Entries Supported: 1 00:08:09.795 Keep Alive: Not Supported 00:08:09.795 00:08:09.795 NVM Command Set Attributes 00:08:09.795 ========================== 00:08:09.795 Submission Queue Entry Size 00:08:09.795 Max: 64 00:08:09.795 Min: 64 00:08:09.795 Completion Queue Entry Size 00:08:09.795 Max: 16 00:08:09.795 Min: 16 00:08:09.795 Number of Namespaces: 256 00:08:09.795 Compare Command: Supported 00:08:09.795 Write Uncorrectable Command: Not Supported 00:08:09.795 Dataset Management Command: Supported 00:08:09.795 Write Zeroes Command: Supported 00:08:09.795 Set Features Save Field: Supported 00:08:09.795 Reservations: Not Supported 00:08:09.795 Timestamp: Supported 00:08:09.795 Copy: Supported 00:08:09.795 Volatile Write Cache: Present 00:08:09.795 Atomic Write Unit (Normal): 1 00:08:09.795 Atomic Write Unit (PFail): 1 00:08:09.795 Atomic Compare & Write Unit: 1 00:08:09.795 Fused Compare & Write: Not Supported 00:08:09.795 Scatter-Gather List 00:08:09.795 SGL Command Set: Supported 00:08:09.795 SGL Keyed: Not Supported 00:08:09.795 SGL Bit Bucket Descriptor: Not Supported 00:08:09.795 SGL Metadata Pointer: Not Supported 00:08:09.795 Oversized SGL: Not Supported 00:08:09.795 SGL Metadata Address: Not Supported 00:08:09.795 SGL Offset: Not Supported 00:08:09.795 Transport SGL Data Block: Not Supported 00:08:09.795 Replay Protected Memory Block: Not Supported 00:08:09.795 00:08:09.795 Firmware Slot Information 00:08:09.795 ========================= 00:08:09.795 Active slot: 1 00:08:09.795 Slot 1 Firmware Revision: 1.0 00:08:09.795 00:08:09.795 00:08:09.795 Commands Supported and Effects 00:08:09.795 ============================== 00:08:09.795 Admin Commands 00:08:09.795 -------------- 00:08:09.795 Delete I/O Submission Queue (00h): Supported 00:08:09.795 Create I/O Submission Queue (01h): Supported 00:08:09.795 Get Log Page (02h): Supported 00:08:09.795 Delete I/O 
Completion Queue (04h): Supported 00:08:09.795 Create I/O Completion Queue (05h): Supported 00:08:09.795 Identify (06h): Supported 00:08:09.795 Abort (08h): Supported 00:08:09.795 Set Features (09h): Supported 00:08:09.795 Get Features (0Ah): Supported 00:08:09.795 Asynchronous Event Request (0Ch): Supported 00:08:09.795 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:09.795 Directive Send (19h): Supported 00:08:09.795 Directive Receive (1Ah): Supported 00:08:09.795 Virtualization Management (1Ch): Supported 00:08:09.795 Doorbell Buffer Config (7Ch): Supported 00:08:09.795 Format NVM (80h): Supported LBA-Change 00:08:09.795 I/O Commands 00:08:09.795 ------------ 00:08:09.796 Flush (00h): Supported LBA-Change 00:08:09.796 Write (01h): Supported LBA-Change 00:08:09.796 Read (02h): Supported 00:08:09.796 Compare (05h): Supported 00:08:09.796 Write Zeroes (08h): Supported LBA-Change 00:08:09.796 Dataset Management (09h): Supported LBA-Change 00:08:09.796 Unknown (0Ch): Supported 00:08:09.796 Unknown (12h): Supported 00:08:09.796 Copy (19h): Supported LBA-Change 00:08:09.796 Unknown (1Dh): Supported LBA-Change 00:08:09.796 00:08:09.796 Error Log 00:08:09.796 ========= 00:08:09.796 00:08:09.796 Arbitration 00:08:09.796 =========== 00:08:09.796 Arbitration Burst: no limit 00:08:09.796 00:08:09.796 Power Management 00:08:09.796 ================ 00:08:09.796 Number of Power States: 1 00:08:09.796 Current Power State: Power State #0 00:08:09.796 Power State #0: 00:08:09.796 Max Power: 25.00 W 00:08:09.796 Non-Operational State: Operational 00:08:09.796 Entry Latency: 16 microseconds 00:08:09.796 Exit Latency: 4 microseconds 00:08:09.796 Relative Read Throughput: 0 00:08:09.796 Relative Read Latency: 0 00:08:09.796 Relative Write Throughput: 0 00:08:09.796 Relative Write Latency: 0 00:08:09.796 Idle Power: Not Reported 00:08:09.796 Active Power: Not Reported 00:08:09.796 Non-Operational Permissive Mode: Not Supported 00:08:09.796 00:08:09.796 Health Information 00:08:09.796 ================== 00:08:09.796 Critical Warnings: 00:08:09.796 Available Spare Space: OK 00:08:09.796 Temperature: OK 00:08:09.796 Device Reliability: OK 00:08:09.796 Read Only: No 00:08:09.796 Volatile Memory Backup: OK 00:08:09.796 Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.796 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:09.796 Available Spare: 0% 00:08:09.796 Available Spare Threshold: 0% 00:08:09.796 Life Percentage Used: 0% 00:08:09.796 Data Units Read: 1067 00:08:09.796 Data Units Written: 935 00:08:09.796 Host Read Commands: 56655 00:08:09.796 Host Write Commands: 55433 00:08:09.796 Controller Busy Time: 0 minutes 00:08:09.796 Power Cycles: 0 00:08:09.796 Power On Hours: 0 hours 00:08:09.796 Unsafe Shutdowns: 0 00:08:09.796 Unrecoverable Media Errors: 0 00:08:09.796 Lifetime Error Log Entries: 0 00:08:09.796 Warning Temperature Time: 0 minutes 00:08:09.796 Critical Temperature Time: 0 minutes 00:08:09.796 00:08:09.796 Number of Queues 00:08:09.796 ================ 00:08:09.796 Number of I/O Submission Queues: 64 00:08:09.796 Number of I/O Completion Queues: 64 00:08:09.796 00:08:09.796 ZNS Specific Controller Data 00:08:09.796 ============================ 00:08:09.796 Zone Append Size Limit: 0 00:08:09.796 00:08:09.796 00:08:09.796 Active Namespaces 00:08:09.796 ================= 00:08:09.796 Namespace ID:1 00:08:09.796 Error Recovery Timeout: Unlimited 00:08:09.796 Command Set Identifier: NVM (00h) 00:08:09.796 Deallocate: Supported 00:08:09.796 Deallocated/Unwritten Error: Supported 
00:08:09.796 Deallocated Read Value: All 0x00 00:08:09.796 Deallocate in Write Zeroes: Not Supported 00:08:09.796 Deallocated Guard Field: 0xFFFF 00:08:09.796 Flush: Supported 00:08:09.796 Reservation: Not Supported 00:08:09.796 Namespace Sharing Capabilities: Private 00:08:09.796 Size (in LBAs): 1310720 (5GiB) 00:08:09.796 Capacity (in LBAs): 1310720 (5GiB) 00:08:09.796 Utilization (in LBAs): 1310720 (5GiB) 00:08:09.796 Thin Provisioning: Not Supported 00:08:09.796 Per-NS Atomic Units: No 00:08:09.796 Maximum Single Source Range Length: 128 00:08:09.796 Maximum Copy Length: 128 00:08:09.796 Maximum Source Range Count: 128 00:08:09.796 NGUID/EUI64 Never Reused: No 00:08:09.796 Namespace Write Protected: No 00:08:09.796 Number of LBA Formats: 8 00:08:09.796 Current LBA Format: LBA Format #04 00:08:09.796 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.796 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.796 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.796 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.796 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.796 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.796 [2024-11-05 11:24:08.994147] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 62766 terminated unexpected 00:08:09.796 [2024-11-05 11:24:08.994919] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 62766 terminated unexpected 00:08:09.796 [2024-11-05 11:24:08.995495] nvme_ctrlr.c:3605:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 62766 terminated unexpected 00:08:09.796 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.796 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.796 00:08:09.796 NVM Specific Namespace Data 00:08:09.796 =========================== 00:08:09.796 Logical Block Storage Tag Mask: 0 00:08:09.796 Protection Information Capabilities: 00:08:09.796 16b Guard Protection Information Storage Tag Support: No 00:08:09.796 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.796 Storage Tag Check Read Support: No 00:08:09.796 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.796 ===================================================== 00:08:09.796 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:09.796 ===================================================== 00:08:09.796 Controller Capabilities/Features 00:08:09.796 ================================ 00:08:09.796 Vendor ID: 1b36 00:08:09.796 Subsystem Vendor ID: 1af4 00:08:09.796 Serial Number: 12342 00:08:09.796 Model Number: QEMU NVMe Ctrl 00:08:09.796 Firmware Version: 8.0.0 00:08:09.796 Recommended Arb Burst: 6 00:08:09.796 IEEE OUI
Identifier: 00 54 52 00:08:09.796 Multi-path I/O 00:08:09.796 May have multiple subsystem ports: No 00:08:09.796 May have multiple controllers: No 00:08:09.796 Associated with SR-IOV VF: No 00:08:09.796 Max Data Transfer Size: 524288 00:08:09.796 Max Number of Namespaces: 256 00:08:09.796 Max Number of I/O Queues: 64 00:08:09.796 NVMe Specification Version (VS): 1.4 00:08:09.796 NVMe Specification Version (Identify): 1.4 00:08:09.796 Maximum Queue Entries: 2048 00:08:09.796 Contiguous Queues Required: Yes 00:08:09.796 Arbitration Mechanisms Supported 00:08:09.796 Weighted Round Robin: Not Supported 00:08:09.796 Vendor Specific: Not Supported 00:08:09.796 Reset Timeout: 7500 ms 00:08:09.796 Doorbell Stride: 4 bytes 00:08:09.796 NVM Subsystem Reset: Not Supported 00:08:09.796 Command Sets Supported 00:08:09.796 NVM Command Set: Supported 00:08:09.796 Boot Partition: Not Supported 00:08:09.796 Memory Page Size Minimum: 4096 bytes 00:08:09.796 Memory Page Size Maximum: 65536 bytes 00:08:09.796 Persistent Memory Region: Not Supported 00:08:09.796 Optional Asynchronous Events Supported 00:08:09.796 Namespace Attribute Notices: Supported 00:08:09.796 Firmware Activation Notices: Not Supported 00:08:09.796 ANA Change Notices: Not Supported 00:08:09.796 PLE Aggregate Log Change Notices: Not Supported 00:08:09.796 LBA Status Info Alert Notices: Not Supported 00:08:09.796 EGE Aggregate Log Change Notices: Not Supported 00:08:09.796 Normal NVM Subsystem Shutdown event: Not Supported 00:08:09.796 Zone Descriptor Change Notices: Not Supported 00:08:09.796 Discovery Log Change Notices: Not Supported 00:08:09.796 Controller Attributes 00:08:09.797 128-bit Host Identifier: Not Supported 00:08:09.797 Non-Operational Permissive Mode: Not Supported 00:08:09.797 NVM Sets: Not Supported 00:08:09.797 Read Recovery Levels: Not Supported 00:08:09.797 Endurance Groups: Not Supported 00:08:09.797 Predictable Latency Mode: Not Supported 00:08:09.797 Traffic Based Keep ALive: Not Supported 00:08:09.797 Namespace Granularity: Not Supported 00:08:09.797 SQ Associations: Not Supported 00:08:09.797 UUID List: Not Supported 00:08:09.797 Multi-Domain Subsystem: Not Supported 00:08:09.797 Fixed Capacity Management: Not Supported 00:08:09.797 Variable Capacity Management: Not Supported 00:08:09.797 Delete Endurance Group: Not Supported 00:08:09.797 Delete NVM Set: Not Supported 00:08:09.797 Extended LBA Formats Supported: Supported 00:08:09.797 Flexible Data Placement Supported: Not Supported 00:08:09.797 00:08:09.797 Controller Memory Buffer Support 00:08:09.797 ================================ 00:08:09.797 Supported: No 00:08:09.797 00:08:09.797 Persistent Memory Region Support 00:08:09.797 ================================ 00:08:09.797 Supported: No 00:08:09.797 00:08:09.797 Admin Command Set Attributes 00:08:09.797 ============================ 00:08:09.797 Security Send/Receive: Not Supported 00:08:09.797 Format NVM: Supported 00:08:09.797 Firmware Activate/Download: Not Supported 00:08:09.797 Namespace Management: Supported 00:08:09.797 Device Self-Test: Not Supported 00:08:09.797 Directives: Supported 00:08:09.797 NVMe-MI: Not Supported 00:08:09.797 Virtualization Management: Not Supported 00:08:09.797 Doorbell Buffer Config: Supported 00:08:09.797 Get LBA Status Capability: Not Supported 00:08:09.797 Command & Feature Lockdown Capability: Not Supported 00:08:09.797 Abort Command Limit: 4 00:08:09.797 Async Event Request Limit: 4 00:08:09.797 Number of Firmware Slots: N/A 00:08:09.797 Firmware Slot 1 Read-Only: N/A 
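
The 524288-byte Max Data Transfer Size reported above is consistent with the controller's 4096-byte minimum memory page size combined with an MDTS value of 7, since MDTS expresses the transfer limit as a power-of-two multiple of that page size. The MDTS value itself is not printed in this output, so 7 is an inferred assumption about this QEMU controller rather than a value taken from the log; a small illustrative check:

  # Assumes MDTS=7 (inferred, not shown in this output); the transfer limit
  # is the minimum memory page size shifted left by MDTS.
  mps_min=4096
  mdts=7
  echo $(( mps_min << mdts ))   # 524288, matching "Max Data Transfer Size" above
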
00:08:09.797 Firmware Activation Without Reset: N/A 00:08:09.797 Multiple Update Detection Support: N/A 00:08:09.797 Firmware Update Granularity: No Information Provided 00:08:09.797 Per-Namespace SMART Log: Yes 00:08:09.797 Asymmetric Namespace Access Log Page: Not Supported 00:08:09.797 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:09.797 Command Effects Log Page: Supported 00:08:09.797 Get Log Page Extended Data: Supported 00:08:09.797 Telemetry Log Pages: Not Supported 00:08:09.797 Persistent Event Log Pages: Not Supported 00:08:09.797 Supported Log Pages Log Page: May Support 00:08:09.797 Commands Supported & Effects Log Page: Not Supported 00:08:09.797 Feature Identifiers & Effects Log Page:May Support 00:08:09.797 NVMe-MI Commands & Effects Log Page: May Support 00:08:09.797 Data Area 4 for Telemetry Log: Not Supported 00:08:09.797 Error Log Page Entries Supported: 1 00:08:09.797 Keep Alive: Not Supported 00:08:09.797 00:08:09.797 NVM Command Set Attributes 00:08:09.797 ========================== 00:08:09.797 Submission Queue Entry Size 00:08:09.797 Max: 64 00:08:09.797 Min: 64 00:08:09.797 Completion Queue Entry Size 00:08:09.797 Max: 16 00:08:09.797 Min: 16 00:08:09.797 Number of Namespaces: 256 00:08:09.797 Compare Command: Supported 00:08:09.797 Write Uncorrectable Command: Not Supported 00:08:09.797 Dataset Management Command: Supported 00:08:09.797 Write Zeroes Command: Supported 00:08:09.797 Set Features Save Field: Supported 00:08:09.797 Reservations: Not Supported 00:08:09.797 Timestamp: Supported 00:08:09.797 Copy: Supported 00:08:09.797 Volatile Write Cache: Present 00:08:09.797 Atomic Write Unit (Normal): 1 00:08:09.797 Atomic Write Unit (PFail): 1 00:08:09.797 Atomic Compare & Write Unit: 1 00:08:09.797 Fused Compare & Write: Not Supported 00:08:09.797 Scatter-Gather List 00:08:09.797 SGL Command Set: Supported 00:08:09.797 SGL Keyed: Not Supported 00:08:09.797 SGL Bit Bucket Descriptor: Not Supported 00:08:09.797 SGL Metadata Pointer: Not Supported 00:08:09.797 Oversized SGL: Not Supported 00:08:09.797 SGL Metadata Address: Not Supported 00:08:09.797 SGL Offset: Not Supported 00:08:09.797 Transport SGL Data Block: Not Supported 00:08:09.797 Replay Protected Memory Block: Not Supported 00:08:09.797 00:08:09.797 Firmware Slot Information 00:08:09.797 ========================= 00:08:09.797 Active slot: 1 00:08:09.797 Slot 1 Firmware Revision: 1.0 00:08:09.797 00:08:09.797 00:08:09.797 Commands Supported and Effects 00:08:09.797 ============================== 00:08:09.797 Admin Commands 00:08:09.797 -------------- 00:08:09.797 Delete I/O Submission Queue (00h): Supported 00:08:09.797 Create I/O Submission Queue (01h): Supported 00:08:09.797 Get Log Page (02h): Supported 00:08:09.797 Delete I/O Completion Queue (04h): Supported 00:08:09.797 Create I/O Completion Queue (05h): Supported 00:08:09.797 Identify (06h): Supported 00:08:09.797 Abort (08h): Supported 00:08:09.797 Set Features (09h): Supported 00:08:09.797 Get Features (0Ah): Supported 00:08:09.797 Asynchronous Event Request (0Ch): Supported 00:08:09.797 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:09.797 Directive Send (19h): Supported 00:08:09.797 Directive Receive (1Ah): Supported 00:08:09.797 Virtualization Management (1Ch): Supported 00:08:09.797 Doorbell Buffer Config (7Ch): Supported 00:08:09.797 Format NVM (80h): Supported LBA-Change 00:08:09.797 I/O Commands 00:08:09.797 ------------ 00:08:09.797 Flush (00h): Supported LBA-Change 00:08:09.797 Write (01h): Supported LBA-Change 
00:08:09.797 Read (02h): Supported 00:08:09.797 Compare (05h): Supported 00:08:09.797 Write Zeroes (08h): Supported LBA-Change 00:08:09.797 Dataset Management (09h): Supported LBA-Change 00:08:09.797 Unknown (0Ch): Supported 00:08:09.797 Unknown (12h): Supported 00:08:09.797 Copy (19h): Supported LBA-Change 00:08:09.797 Unknown (1Dh): Supported LBA-Change 00:08:09.797 00:08:09.797 Error Log 00:08:09.797 ========= 00:08:09.797 00:08:09.797 Arbitration 00:08:09.797 =========== 00:08:09.797 Arbitration Burst: no limit 00:08:09.797 00:08:09.797 Power Management 00:08:09.797 ================ 00:08:09.797 Number of Power States: 1 00:08:09.797 Current Power State: Power State #0 00:08:09.797 Power State #0: 00:08:09.797 Max Power: 25.00 W 00:08:09.797 Non-Operational State: Operational 00:08:09.797 Entry Latency: 16 microseconds 00:08:09.797 Exit Latency: 4 microseconds 00:08:09.797 Relative Read Throughput: 0 00:08:09.797 Relative Read Latency: 0 00:08:09.797 Relative Write Throughput: 0 00:08:09.797 Relative Write Latency: 0 00:08:09.797 Idle Power: Not Reported 00:08:09.797 Active Power: Not Reported 00:08:09.797 Non-Operational Permissive Mode: Not Supported 00:08:09.797 00:08:09.797 Health Information 00:08:09.797 ================== 00:08:09.797 Critical Warnings: 00:08:09.797 Available Spare Space: OK 00:08:09.797 Temperature: OK 00:08:09.797 Device Reliability: OK 00:08:09.797 Read Only: No 00:08:09.797 Volatile Memory Backup: OK 00:08:09.797 Current Temperature: 323 Kelvin (50 Celsius) 00:08:09.797 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:09.797 Available Spare: 0% 00:08:09.797 Available Spare Threshold: 0% 00:08:09.797 Life Percentage Used: 0% 00:08:09.797 Data Units Read: 2249 00:08:09.797 Data Units Written: 2036 00:08:09.797 Host Read Commands: 118552 00:08:09.797 Host Write Commands: 116821 00:08:09.797 Controller Busy Time: 0 minutes 00:08:09.797 Power Cycles: 0 00:08:09.797 Power On Hours: 0 hours 00:08:09.797 Unsafe Shutdowns: 0 00:08:09.797 Unrecoverable Media Errors: 0 00:08:09.797 Lifetime Error Log Entries: 0 00:08:09.797 Warning Temperature Time: 0 minutes 00:08:09.797 Critical Temperature Time: 0 minutes 00:08:09.797 00:08:09.797 Number of Queues 00:08:09.797 ================ 00:08:09.797 Number of I/O Submission Queues: 64 00:08:09.797 Number of I/O Completion Queues: 64 00:08:09.797 00:08:09.797 ZNS Specific Controller Data 00:08:09.797 ============================ 00:08:09.797 Zone Append Size Limit: 0 00:08:09.797 00:08:09.797 00:08:09.797 Active Namespaces 00:08:09.797 ================= 00:08:09.797 Namespace ID:1 00:08:09.797 Error Recovery Timeout: Unlimited 00:08:09.797 Command Set Identifier: NVM (00h) 00:08:09.797 Deallocate: Supported 00:08:09.797 Deallocated/Unwritten Error: Supported 00:08:09.797 Deallocated Read Value: All 0x00 00:08:09.797 Deallocate in Write Zeroes: Not Supported 00:08:09.797 Deallocated Guard Field: 0xFFFF 00:08:09.797 Flush: Supported 00:08:09.797 Reservation: Not Supported 00:08:09.797 Namespace Sharing Capabilities: Private 00:08:09.797 Size (in LBAs): 1048576 (4GiB) 00:08:09.797 Capacity (in LBAs): 1048576 (4GiB) 00:08:09.797 Utilization (in LBAs): 1048576 (4GiB) 00:08:09.798 Thin Provisioning: Not Supported 00:08:09.798 Per-NS Atomic Units: No 00:08:09.798 Maximum Single Source Range Length: 128 00:08:09.798 Maximum Copy Length: 128 00:08:09.798 Maximum Source Range Count: 128 00:08:09.798 NGUID/EUI64 Never Reused: No 00:08:09.798 Namespace Write Protected: No 00:08:09.798 Number of LBA Formats: 8 00:08:09.798 Current LBA 
Format: LBA Format #04 00:08:09.798 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.798 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.798 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.798 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.798 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.798 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.798 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.798 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.798 00:08:09.798 NVM Specific Namespace Data 00:08:09.798 =========================== 00:08:09.798 Logical Block Storage Tag Mask: 0 00:08:09.798 Protection Information Capabilities: 00:08:09.798 16b Guard Protection Information Storage Tag Support: No 00:08:09.798 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.798 Storage Tag Check Read Support: No 00:08:09.798 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Namespace ID:2 00:08:09.798 Error Recovery Timeout: Unlimited 00:08:09.798 Command Set Identifier: NVM (00h) 00:08:09.798 Deallocate: Supported 00:08:09.798 Deallocated/Unwritten Error: Supported 00:08:09.798 Deallocated Read Value: All 0x00 00:08:09.798 Deallocate in Write Zeroes: Not Supported 00:08:09.798 Deallocated Guard Field: 0xFFFF 00:08:09.798 Flush: Supported 00:08:09.798 Reservation: Not Supported 00:08:09.798 Namespace Sharing Capabilities: Private 00:08:09.798 Size (in LBAs): 1048576 (4GiB) 00:08:09.798 Capacity (in LBAs): 1048576 (4GiB) 00:08:09.798 Utilization (in LBAs): 1048576 (4GiB) 00:08:09.798 Thin Provisioning: Not Supported 00:08:09.798 Per-NS Atomic Units: No 00:08:09.798 Maximum Single Source Range Length: 128 00:08:09.798 Maximum Copy Length: 128 00:08:09.798 Maximum Source Range Count: 128 00:08:09.798 NGUID/EUI64 Never Reused: No 00:08:09.798 Namespace Write Protected: No 00:08:09.798 Number of LBA Formats: 8 00:08:09.798 Current LBA Format: LBA Format #04 00:08:09.798 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.798 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.798 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.798 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.798 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.798 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.798 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.798 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.798 00:08:09.798 NVM Specific Namespace Data 00:08:09.798 =========================== 00:08:09.798 Logical Block Storage Tag Mask: 0 00:08:09.798 Protection Information Capabilities: 00:08:09.798 16b Guard Protection Information Storage Tag Support: No 00:08:09.798 16b Guard Protection 
Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.798 Storage Tag Check Read Support: No 00:08:09.798 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Namespace ID:3 00:08:09.798 Error Recovery Timeout: Unlimited 00:08:09.798 Command Set Identifier: NVM (00h) 00:08:09.798 Deallocate: Supported 00:08:09.798 Deallocated/Unwritten Error: Supported 00:08:09.798 Deallocated Read Value: All 0x00 00:08:09.798 Deallocate in Write Zeroes: Not Supported 00:08:09.798 Deallocated Guard Field: 0xFFFF 00:08:09.798 Flush: Supported 00:08:09.798 Reservation: Not Supported 00:08:09.798 Namespace Sharing Capabilities: Private 00:08:09.798 Size (in LBAs): 1048576 (4GiB) 00:08:09.798 Capacity (in LBAs): 1048576 (4GiB) 00:08:09.798 Utilization (in LBAs): 1048576 (4GiB) 00:08:09.798 Thin Provisioning: Not Supported 00:08:09.798 Per-NS Atomic Units: No 00:08:09.798 Maximum Single Source Range Length: 128 00:08:09.798 Maximum Copy Length: 128 00:08:09.798 Maximum Source Range Count: 128 00:08:09.798 NGUID/EUI64 Never Reused: No 00:08:09.798 Namespace Write Protected: No 00:08:09.798 Number of LBA Formats: 8 00:08:09.798 Current LBA Format: LBA Format #04 00:08:09.798 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:09.798 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:09.798 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:09.798 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:09.798 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:09.798 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:09.798 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:09.798 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:09.798 00:08:09.798 NVM Specific Namespace Data 00:08:09.798 =========================== 00:08:09.798 Logical Block Storage Tag Mask: 0 00:08:09.798 Protection Information Capabilities: 00:08:09.798 16b Guard Protection Information Storage Tag Support: No 00:08:09.798 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:09.798 Storage Tag Check Read Support: No 00:08:09.798 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information 
Format: 16b Guard PI 00:08:09.798 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:09.798 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:09.798 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:10.057 ===================================================== 00:08:10.057 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:10.057 ===================================================== 00:08:10.057 Controller Capabilities/Features 00:08:10.057 ================================ 00:08:10.057 Vendor ID: 1b36 00:08:10.057 Subsystem Vendor ID: 1af4 00:08:10.057 Serial Number: 12340 00:08:10.057 Model Number: QEMU NVMe Ctrl 00:08:10.057 Firmware Version: 8.0.0 00:08:10.057 Recommended Arb Burst: 6 00:08:10.057 IEEE OUI Identifier: 00 54 52 00:08:10.057 Multi-path I/O 00:08:10.057 May have multiple subsystem ports: No 00:08:10.057 May have multiple controllers: No 00:08:10.057 Associated with SR-IOV VF: No 00:08:10.057 Max Data Transfer Size: 524288 00:08:10.057 Max Number of Namespaces: 256 00:08:10.057 Max Number of I/O Queues: 64 00:08:10.057 NVMe Specification Version (VS): 1.4 00:08:10.057 NVMe Specification Version (Identify): 1.4 00:08:10.057 Maximum Queue Entries: 2048 00:08:10.057 Contiguous Queues Required: Yes 00:08:10.057 Arbitration Mechanisms Supported 00:08:10.057 Weighted Round Robin: Not Supported 00:08:10.057 Vendor Specific: Not Supported 00:08:10.057 Reset Timeout: 7500 ms 00:08:10.057 Doorbell Stride: 4 bytes 00:08:10.057 NVM Subsystem Reset: Not Supported 00:08:10.057 Command Sets Supported 00:08:10.057 NVM Command Set: Supported 00:08:10.057 Boot Partition: Not Supported 00:08:10.057 Memory Page Size Minimum: 4096 bytes 00:08:10.057 Memory Page Size Maximum: 65536 bytes 00:08:10.057 Persistent Memory Region: Not Supported 00:08:10.057 Optional Asynchronous Events Supported 00:08:10.057 Namespace Attribute Notices: Supported 00:08:10.057 Firmware Activation Notices: Not Supported 00:08:10.057 ANA Change Notices: Not Supported 00:08:10.057 PLE Aggregate Log Change Notices: Not Supported 00:08:10.057 LBA Status Info Alert Notices: Not Supported 00:08:10.057 EGE Aggregate Log Change Notices: Not Supported 00:08:10.057 Normal NVM Subsystem Shutdown event: Not Supported 00:08:10.057 Zone Descriptor Change Notices: Not Supported 00:08:10.057 Discovery Log Change Notices: Not Supported 00:08:10.057 Controller Attributes 00:08:10.057 128-bit Host Identifier: Not Supported 00:08:10.057 Non-Operational Permissive Mode: Not Supported 00:08:10.057 NVM Sets: Not Supported 00:08:10.057 Read Recovery Levels: Not Supported 00:08:10.057 Endurance Groups: Not Supported 00:08:10.057 Predictable Latency Mode: Not Supported 00:08:10.057 Traffic Based Keep ALive: Not Supported 00:08:10.057 Namespace Granularity: Not Supported 00:08:10.057 SQ Associations: Not Supported 00:08:10.057 UUID List: Not Supported 00:08:10.057 Multi-Domain Subsystem: Not Supported 00:08:10.057 Fixed Capacity Management: Not Supported 00:08:10.057 Variable Capacity Management: Not Supported 00:08:10.057 Delete Endurance Group: Not Supported 00:08:10.057 Delete NVM Set: Not Supported 00:08:10.057 Extended LBA Formats Supported: Supported 00:08:10.058 Flexible Data Placement Supported: Not Supported 00:08:10.058 00:08:10.058 Controller Memory Buffer Support 00:08:10.058 ================================ 00:08:10.058 Supported: No 00:08:10.058 
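
The spdk_nvme_identify invocations traced above (the nvme/nvme.sh@15-16 xtrace lines) follow one simple pattern: loop over the PCIe addresses under test and dump Identify data for each controller. A minimal sketch of that pattern, assuming a bdfs array holding the four addresses seen in this log (the hard-coded list is an illustration, not the actual nvme.sh logic):

  # Sketch only: run spdk_nvme_identify once per PCIe controller address,
  # exactly as the traced commands in this log do. The explicit bdfs list
  # is an assumption for illustration.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:${bdf}" -i 0
  done
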
00:08:10.058 Persistent Memory Region Support 00:08:10.058 ================================ 00:08:10.058 Supported: No 00:08:10.058 00:08:10.058 Admin Command Set Attributes 00:08:10.058 ============================ 00:08:10.058 Security Send/Receive: Not Supported 00:08:10.058 Format NVM: Supported 00:08:10.058 Firmware Activate/Download: Not Supported 00:08:10.058 Namespace Management: Supported 00:08:10.058 Device Self-Test: Not Supported 00:08:10.058 Directives: Supported 00:08:10.058 NVMe-MI: Not Supported 00:08:10.058 Virtualization Management: Not Supported 00:08:10.058 Doorbell Buffer Config: Supported 00:08:10.058 Get LBA Status Capability: Not Supported 00:08:10.058 Command & Feature Lockdown Capability: Not Supported 00:08:10.058 Abort Command Limit: 4 00:08:10.058 Async Event Request Limit: 4 00:08:10.058 Number of Firmware Slots: N/A 00:08:10.058 Firmware Slot 1 Read-Only: N/A 00:08:10.058 Firmware Activation Without Reset: N/A 00:08:10.058 Multiple Update Detection Support: N/A 00:08:10.058 Firmware Update Granularity: No Information Provided 00:08:10.058 Per-Namespace SMART Log: Yes 00:08:10.058 Asymmetric Namespace Access Log Page: Not Supported 00:08:10.058 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:10.058 Command Effects Log Page: Supported 00:08:10.058 Get Log Page Extended Data: Supported 00:08:10.058 Telemetry Log Pages: Not Supported 00:08:10.058 Persistent Event Log Pages: Not Supported 00:08:10.058 Supported Log Pages Log Page: May Support 00:08:10.058 Commands Supported & Effects Log Page: Not Supported 00:08:10.058 Feature Identifiers & Effects Log Page:May Support 00:08:10.058 NVMe-MI Commands & Effects Log Page: May Support 00:08:10.058 Data Area 4 for Telemetry Log: Not Supported 00:08:10.058 Error Log Page Entries Supported: 1 00:08:10.058 Keep Alive: Not Supported 00:08:10.058 00:08:10.058 NVM Command Set Attributes 00:08:10.058 ========================== 00:08:10.058 Submission Queue Entry Size 00:08:10.058 Max: 64 00:08:10.058 Min: 64 00:08:10.058 Completion Queue Entry Size 00:08:10.058 Max: 16 00:08:10.058 Min: 16 00:08:10.058 Number of Namespaces: 256 00:08:10.058 Compare Command: Supported 00:08:10.058 Write Uncorrectable Command: Not Supported 00:08:10.058 Dataset Management Command: Supported 00:08:10.058 Write Zeroes Command: Supported 00:08:10.058 Set Features Save Field: Supported 00:08:10.058 Reservations: Not Supported 00:08:10.058 Timestamp: Supported 00:08:10.058 Copy: Supported 00:08:10.058 Volatile Write Cache: Present 00:08:10.058 Atomic Write Unit (Normal): 1 00:08:10.058 Atomic Write Unit (PFail): 1 00:08:10.058 Atomic Compare & Write Unit: 1 00:08:10.058 Fused Compare & Write: Not Supported 00:08:10.058 Scatter-Gather List 00:08:10.058 SGL Command Set: Supported 00:08:10.058 SGL Keyed: Not Supported 00:08:10.058 SGL Bit Bucket Descriptor: Not Supported 00:08:10.058 SGL Metadata Pointer: Not Supported 00:08:10.058 Oversized SGL: Not Supported 00:08:10.058 SGL Metadata Address: Not Supported 00:08:10.058 SGL Offset: Not Supported 00:08:10.058 Transport SGL Data Block: Not Supported 00:08:10.058 Replay Protected Memory Block: Not Supported 00:08:10.058 00:08:10.058 Firmware Slot Information 00:08:10.058 ========================= 00:08:10.058 Active slot: 1 00:08:10.058 Slot 1 Firmware Revision: 1.0 00:08:10.058 00:08:10.058 00:08:10.058 Commands Supported and Effects 00:08:10.058 ============================== 00:08:10.058 Admin Commands 00:08:10.058 -------------- 00:08:10.058 Delete I/O Submission Queue (00h): Supported 00:08:10.058 
Create I/O Submission Queue (01h): Supported 00:08:10.058 Get Log Page (02h): Supported 00:08:10.058 Delete I/O Completion Queue (04h): Supported 00:08:10.058 Create I/O Completion Queue (05h): Supported 00:08:10.058 Identify (06h): Supported 00:08:10.058 Abort (08h): Supported 00:08:10.058 Set Features (09h): Supported 00:08:10.058 Get Features (0Ah): Supported 00:08:10.058 Asynchronous Event Request (0Ch): Supported 00:08:10.058 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:10.058 Directive Send (19h): Supported 00:08:10.058 Directive Receive (1Ah): Supported 00:08:10.058 Virtualization Management (1Ch): Supported 00:08:10.058 Doorbell Buffer Config (7Ch): Supported 00:08:10.058 Format NVM (80h): Supported LBA-Change 00:08:10.058 I/O Commands 00:08:10.058 ------------ 00:08:10.058 Flush (00h): Supported LBA-Change 00:08:10.058 Write (01h): Supported LBA-Change 00:08:10.058 Read (02h): Supported 00:08:10.058 Compare (05h): Supported 00:08:10.058 Write Zeroes (08h): Supported LBA-Change 00:08:10.058 Dataset Management (09h): Supported LBA-Change 00:08:10.058 Unknown (0Ch): Supported 00:08:10.058 Unknown (12h): Supported 00:08:10.058 Copy (19h): Supported LBA-Change 00:08:10.058 Unknown (1Dh): Supported LBA-Change 00:08:10.058 00:08:10.058 Error Log 00:08:10.058 ========= 00:08:10.058 00:08:10.058 Arbitration 00:08:10.058 =========== 00:08:10.058 Arbitration Burst: no limit 00:08:10.058 00:08:10.058 Power Management 00:08:10.058 ================ 00:08:10.058 Number of Power States: 1 00:08:10.058 Current Power State: Power State #0 00:08:10.058 Power State #0: 00:08:10.058 Max Power: 25.00 W 00:08:10.058 Non-Operational State: Operational 00:08:10.058 Entry Latency: 16 microseconds 00:08:10.058 Exit Latency: 4 microseconds 00:08:10.058 Relative Read Throughput: 0 00:08:10.058 Relative Read Latency: 0 00:08:10.058 Relative Write Throughput: 0 00:08:10.058 Relative Write Latency: 0 00:08:10.058 Idle Power: Not Reported 00:08:10.058 Active Power: Not Reported 00:08:10.058 Non-Operational Permissive Mode: Not Supported 00:08:10.058 00:08:10.058 Health Information 00:08:10.058 ================== 00:08:10.058 Critical Warnings: 00:08:10.058 Available Spare Space: OK 00:08:10.058 Temperature: OK 00:08:10.058 Device Reliability: OK 00:08:10.058 Read Only: No 00:08:10.058 Volatile Memory Backup: OK 00:08:10.058 Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.058 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:10.058 Available Spare: 0% 00:08:10.058 Available Spare Threshold: 0% 00:08:10.058 Life Percentage Used: 0% 00:08:10.058 Data Units Read: 691 00:08:10.058 Data Units Written: 619 00:08:10.058 Host Read Commands: 38654 00:08:10.058 Host Write Commands: 38440 00:08:10.058 Controller Busy Time: 0 minutes 00:08:10.058 Power Cycles: 0 00:08:10.058 Power On Hours: 0 hours 00:08:10.058 Unsafe Shutdowns: 0 00:08:10.058 Unrecoverable Media Errors: 0 00:08:10.058 Lifetime Error Log Entries: 0 00:08:10.058 Warning Temperature Time: 0 minutes 00:08:10.058 Critical Temperature Time: 0 minutes 00:08:10.058 00:08:10.058 Number of Queues 00:08:10.058 ================ 00:08:10.058 Number of I/O Submission Queues: 64 00:08:10.058 Number of I/O Completion Queues: 64 00:08:10.058 00:08:10.058 ZNS Specific Controller Data 00:08:10.058 ============================ 00:08:10.058 Zone Append Size Limit: 0 00:08:10.058 00:08:10.058 00:08:10.058 Active Namespaces 00:08:10.058 ================= 00:08:10.058 Namespace ID:1 00:08:10.058 Error Recovery Timeout: Unlimited 00:08:10.058 Command Set 
Identifier: NVM (00h) 00:08:10.058 Deallocate: Supported 00:08:10.058 Deallocated/Unwritten Error: Supported 00:08:10.058 Deallocated Read Value: All 0x00 00:08:10.058 Deallocate in Write Zeroes: Not Supported 00:08:10.058 Deallocated Guard Field: 0xFFFF 00:08:10.058 Flush: Supported 00:08:10.058 Reservation: Not Supported 00:08:10.058 Metadata Transferred as: Separate Metadata Buffer 00:08:10.058 Namespace Sharing Capabilities: Private 00:08:10.058 Size (in LBAs): 1548666 (5GiB) 00:08:10.058 Capacity (in LBAs): 1548666 (5GiB) 00:08:10.058 Utilization (in LBAs): 1548666 (5GiB) 00:08:10.058 Thin Provisioning: Not Supported 00:08:10.058 Per-NS Atomic Units: No 00:08:10.058 Maximum Single Source Range Length: 128 00:08:10.058 Maximum Copy Length: 128 00:08:10.058 Maximum Source Range Count: 128 00:08:10.058 NGUID/EUI64 Never Reused: No 00:08:10.058 Namespace Write Protected: No 00:08:10.058 Number of LBA Formats: 8 00:08:10.058 Current LBA Format: LBA Format #07 00:08:10.058 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:10.058 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.058 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:10.058 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.058 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.058 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.058 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.058 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.058 00:08:10.058 NVM Specific Namespace Data 00:08:10.058 =========================== 00:08:10.059 Logical Block Storage Tag Mask: 0 00:08:10.059 Protection Information Capabilities: 00:08:10.059 16b Guard Protection Information Storage Tag Support: No 00:08:10.059 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.059 Storage Tag Check Read Support: No 00:08:10.059 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.059 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:10.059 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:10.318 ===================================================== 00:08:10.318 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:10.318 ===================================================== 00:08:10.318 Controller Capabilities/Features 00:08:10.318 ================================ 00:08:10.318 Vendor ID: 1b36 00:08:10.318 Subsystem Vendor ID: 1af4 00:08:10.318 Serial Number: 12341 00:08:10.318 Model Number: QEMU NVMe Ctrl 00:08:10.318 Firmware Version: 8.0.0 00:08:10.318 Recommended Arb Burst: 6 00:08:10.318 IEEE OUI Identifier: 00 54 52 00:08:10.318 Multi-path I/O 00:08:10.318 May have 
multiple subsystem ports: No 00:08:10.318 May have multiple controllers: No 00:08:10.318 Associated with SR-IOV VF: No 00:08:10.318 Max Data Transfer Size: 524288 00:08:10.318 Max Number of Namespaces: 256 00:08:10.318 Max Number of I/O Queues: 64 00:08:10.318 NVMe Specification Version (VS): 1.4 00:08:10.318 NVMe Specification Version (Identify): 1.4 00:08:10.318 Maximum Queue Entries: 2048 00:08:10.318 Contiguous Queues Required: Yes 00:08:10.318 Arbitration Mechanisms Supported 00:08:10.318 Weighted Round Robin: Not Supported 00:08:10.318 Vendor Specific: Not Supported 00:08:10.318 Reset Timeout: 7500 ms 00:08:10.318 Doorbell Stride: 4 bytes 00:08:10.318 NVM Subsystem Reset: Not Supported 00:08:10.318 Command Sets Supported 00:08:10.318 NVM Command Set: Supported 00:08:10.318 Boot Partition: Not Supported 00:08:10.318 Memory Page Size Minimum: 4096 bytes 00:08:10.318 Memory Page Size Maximum: 65536 bytes 00:08:10.318 Persistent Memory Region: Not Supported 00:08:10.318 Optional Asynchronous Events Supported 00:08:10.318 Namespace Attribute Notices: Supported 00:08:10.318 Firmware Activation Notices: Not Supported 00:08:10.318 ANA Change Notices: Not Supported 00:08:10.318 PLE Aggregate Log Change Notices: Not Supported 00:08:10.318 LBA Status Info Alert Notices: Not Supported 00:08:10.318 EGE Aggregate Log Change Notices: Not Supported 00:08:10.318 Normal NVM Subsystem Shutdown event: Not Supported 00:08:10.318 Zone Descriptor Change Notices: Not Supported 00:08:10.318 Discovery Log Change Notices: Not Supported 00:08:10.318 Controller Attributes 00:08:10.318 128-bit Host Identifier: Not Supported 00:08:10.318 Non-Operational Permissive Mode: Not Supported 00:08:10.318 NVM Sets: Not Supported 00:08:10.318 Read Recovery Levels: Not Supported 00:08:10.318 Endurance Groups: Not Supported 00:08:10.318 Predictable Latency Mode: Not Supported 00:08:10.318 Traffic Based Keep ALive: Not Supported 00:08:10.318 Namespace Granularity: Not Supported 00:08:10.318 SQ Associations: Not Supported 00:08:10.318 UUID List: Not Supported 00:08:10.318 Multi-Domain Subsystem: Not Supported 00:08:10.318 Fixed Capacity Management: Not Supported 00:08:10.318 Variable Capacity Management: Not Supported 00:08:10.318 Delete Endurance Group: Not Supported 00:08:10.318 Delete NVM Set: Not Supported 00:08:10.318 Extended LBA Formats Supported: Supported 00:08:10.318 Flexible Data Placement Supported: Not Supported 00:08:10.318 00:08:10.318 Controller Memory Buffer Support 00:08:10.318 ================================ 00:08:10.318 Supported: No 00:08:10.318 00:08:10.318 Persistent Memory Region Support 00:08:10.318 ================================ 00:08:10.318 Supported: No 00:08:10.318 00:08:10.318 Admin Command Set Attributes 00:08:10.318 ============================ 00:08:10.318 Security Send/Receive: Not Supported 00:08:10.318 Format NVM: Supported 00:08:10.318 Firmware Activate/Download: Not Supported 00:08:10.318 Namespace Management: Supported 00:08:10.318 Device Self-Test: Not Supported 00:08:10.318 Directives: Supported 00:08:10.318 NVMe-MI: Not Supported 00:08:10.318 Virtualization Management: Not Supported 00:08:10.318 Doorbell Buffer Config: Supported 00:08:10.318 Get LBA Status Capability: Not Supported 00:08:10.318 Command & Feature Lockdown Capability: Not Supported 00:08:10.318 Abort Command Limit: 4 00:08:10.318 Async Event Request Limit: 4 00:08:10.318 Number of Firmware Slots: N/A 00:08:10.318 Firmware Slot 1 Read-Only: N/A 00:08:10.318 Firmware Activation Without Reset: N/A 00:08:10.318 Multiple 
Update Detection Support: N/A 00:08:10.318 Firmware Update Granularity: No Information Provided 00:08:10.318 Per-Namespace SMART Log: Yes 00:08:10.318 Asymmetric Namespace Access Log Page: Not Supported 00:08:10.318 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:10.318 Command Effects Log Page: Supported 00:08:10.318 Get Log Page Extended Data: Supported 00:08:10.318 Telemetry Log Pages: Not Supported 00:08:10.318 Persistent Event Log Pages: Not Supported 00:08:10.318 Supported Log Pages Log Page: May Support 00:08:10.318 Commands Supported & Effects Log Page: Not Supported 00:08:10.318 Feature Identifiers & Effects Log Page:May Support 00:08:10.318 NVMe-MI Commands & Effects Log Page: May Support 00:08:10.318 Data Area 4 for Telemetry Log: Not Supported 00:08:10.318 Error Log Page Entries Supported: 1 00:08:10.318 Keep Alive: Not Supported 00:08:10.318 00:08:10.318 NVM Command Set Attributes 00:08:10.318 ========================== 00:08:10.318 Submission Queue Entry Size 00:08:10.318 Max: 64 00:08:10.318 Min: 64 00:08:10.318 Completion Queue Entry Size 00:08:10.318 Max: 16 00:08:10.318 Min: 16 00:08:10.318 Number of Namespaces: 256 00:08:10.318 Compare Command: Supported 00:08:10.318 Write Uncorrectable Command: Not Supported 00:08:10.318 Dataset Management Command: Supported 00:08:10.318 Write Zeroes Command: Supported 00:08:10.318 Set Features Save Field: Supported 00:08:10.318 Reservations: Not Supported 00:08:10.318 Timestamp: Supported 00:08:10.318 Copy: Supported 00:08:10.318 Volatile Write Cache: Present 00:08:10.318 Atomic Write Unit (Normal): 1 00:08:10.318 Atomic Write Unit (PFail): 1 00:08:10.318 Atomic Compare & Write Unit: 1 00:08:10.318 Fused Compare & Write: Not Supported 00:08:10.318 Scatter-Gather List 00:08:10.318 SGL Command Set: Supported 00:08:10.318 SGL Keyed: Not Supported 00:08:10.318 SGL Bit Bucket Descriptor: Not Supported 00:08:10.318 SGL Metadata Pointer: Not Supported 00:08:10.318 Oversized SGL: Not Supported 00:08:10.318 SGL Metadata Address: Not Supported 00:08:10.318 SGL Offset: Not Supported 00:08:10.318 Transport SGL Data Block: Not Supported 00:08:10.318 Replay Protected Memory Block: Not Supported 00:08:10.318 00:08:10.318 Firmware Slot Information 00:08:10.318 ========================= 00:08:10.318 Active slot: 1 00:08:10.318 Slot 1 Firmware Revision: 1.0 00:08:10.318 00:08:10.318 00:08:10.318 Commands Supported and Effects 00:08:10.318 ============================== 00:08:10.318 Admin Commands 00:08:10.318 -------------- 00:08:10.318 Delete I/O Submission Queue (00h): Supported 00:08:10.318 Create I/O Submission Queue (01h): Supported 00:08:10.318 Get Log Page (02h): Supported 00:08:10.318 Delete I/O Completion Queue (04h): Supported 00:08:10.318 Create I/O Completion Queue (05h): Supported 00:08:10.318 Identify (06h): Supported 00:08:10.318 Abort (08h): Supported 00:08:10.318 Set Features (09h): Supported 00:08:10.318 Get Features (0Ah): Supported 00:08:10.318 Asynchronous Event Request (0Ch): Supported 00:08:10.318 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:10.318 Directive Send (19h): Supported 00:08:10.318 Directive Receive (1Ah): Supported 00:08:10.318 Virtualization Management (1Ch): Supported 00:08:10.318 Doorbell Buffer Config (7Ch): Supported 00:08:10.319 Format NVM (80h): Supported LBA-Change 00:08:10.319 I/O Commands 00:08:10.319 ------------ 00:08:10.319 Flush (00h): Supported LBA-Change 00:08:10.319 Write (01h): Supported LBA-Change 00:08:10.319 Read (02h): Supported 00:08:10.319 Compare (05h): Supported 00:08:10.319 
Write Zeroes (08h): Supported LBA-Change 00:08:10.319 Dataset Management (09h): Supported LBA-Change 00:08:10.319 Unknown (0Ch): Supported 00:08:10.319 Unknown (12h): Supported 00:08:10.319 Copy (19h): Supported LBA-Change 00:08:10.319 Unknown (1Dh): Supported LBA-Change 00:08:10.319 00:08:10.319 Error Log 00:08:10.319 ========= 00:08:10.319 00:08:10.319 Arbitration 00:08:10.319 =========== 00:08:10.319 Arbitration Burst: no limit 00:08:10.319 00:08:10.319 Power Management 00:08:10.319 ================ 00:08:10.319 Number of Power States: 1 00:08:10.319 Current Power State: Power State #0 00:08:10.319 Power State #0: 00:08:10.319 Max Power: 25.00 W 00:08:10.319 Non-Operational State: Operational 00:08:10.319 Entry Latency: 16 microseconds 00:08:10.319 Exit Latency: 4 microseconds 00:08:10.319 Relative Read Throughput: 0 00:08:10.319 Relative Read Latency: 0 00:08:10.319 Relative Write Throughput: 0 00:08:10.319 Relative Write Latency: 0 00:08:10.319 Idle Power: Not Reported 00:08:10.319 Active Power: Not Reported 00:08:10.319 Non-Operational Permissive Mode: Not Supported 00:08:10.319 00:08:10.319 Health Information 00:08:10.319 ================== 00:08:10.319 Critical Warnings: 00:08:10.319 Available Spare Space: OK 00:08:10.319 Temperature: OK 00:08:10.319 Device Reliability: OK 00:08:10.319 Read Only: No 00:08:10.319 Volatile Memory Backup: OK 00:08:10.319 Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.319 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:10.319 Available Spare: 0% 00:08:10.319 Available Spare Threshold: 0% 00:08:10.319 Life Percentage Used: 0% 00:08:10.319 Data Units Read: 1067 00:08:10.319 Data Units Written: 935 00:08:10.319 Host Read Commands: 56655 00:08:10.319 Host Write Commands: 55433 00:08:10.319 Controller Busy Time: 0 minutes 00:08:10.319 Power Cycles: 0 00:08:10.319 Power On Hours: 0 hours 00:08:10.319 Unsafe Shutdowns: 0 00:08:10.319 Unrecoverable Media Errors: 0 00:08:10.319 Lifetime Error Log Entries: 0 00:08:10.319 Warning Temperature Time: 0 minutes 00:08:10.319 Critical Temperature Time: 0 minutes 00:08:10.319 00:08:10.319 Number of Queues 00:08:10.319 ================ 00:08:10.319 Number of I/O Submission Queues: 64 00:08:10.319 Number of I/O Completion Queues: 64 00:08:10.319 00:08:10.319 ZNS Specific Controller Data 00:08:10.319 ============================ 00:08:10.319 Zone Append Size Limit: 0 00:08:10.319 00:08:10.319 00:08:10.319 Active Namespaces 00:08:10.319 ================= 00:08:10.319 Namespace ID:1 00:08:10.319 Error Recovery Timeout: Unlimited 00:08:10.319 Command Set Identifier: NVM (00h) 00:08:10.319 Deallocate: Supported 00:08:10.319 Deallocated/Unwritten Error: Supported 00:08:10.319 Deallocated Read Value: All 0x00 00:08:10.319 Deallocate in Write Zeroes: Not Supported 00:08:10.319 Deallocated Guard Field: 0xFFFF 00:08:10.319 Flush: Supported 00:08:10.319 Reservation: Not Supported 00:08:10.319 Namespace Sharing Capabilities: Private 00:08:10.319 Size (in LBAs): 1310720 (5GiB) 00:08:10.319 Capacity (in LBAs): 1310720 (5GiB) 00:08:10.319 Utilization (in LBAs): 1310720 (5GiB) 00:08:10.319 Thin Provisioning: Not Supported 00:08:10.319 Per-NS Atomic Units: No 00:08:10.319 Maximum Single Source Range Length: 128 00:08:10.319 Maximum Copy Length: 128 00:08:10.319 Maximum Source Range Count: 128 00:08:10.319 NGUID/EUI64 Never Reused: No 00:08:10.319 Namespace Write Protected: No 00:08:10.319 Number of LBA Formats: 8 00:08:10.319 Current LBA Format: LBA Format #04 00:08:10.319 LBA Format #00: Data Size: 512 Metadata Size: 0 
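
The temperature fields above are reported in kelvin, with the Celsius value derived by subtracting 273 (323 Kelvin -> 50 Celsius for the current temperature, 343 Kelvin -> 70 Celsius for the threshold). The same conversion as a one-line shell check (illustrative only):

  # Illustrative conversion of the reported kelvin readings to Celsius.
  for kelvin in 323 343; do echo "$kelvin Kelvin = $(( kelvin - 273 )) Celsius"; done
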
00:08:10.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:10.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.319 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.319 00:08:10.319 NVM Specific Namespace Data 00:08:10.319 =========================== 00:08:10.319 Logical Block Storage Tag Mask: 0 00:08:10.319 Protection Information Capabilities: 00:08:10.319 16b Guard Protection Information Storage Tag Support: No 00:08:10.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.319 Storage Tag Check Read Support: No 00:08:10.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.319 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:10.319 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:10.578 ===================================================== 00:08:10.578 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:10.578 ===================================================== 00:08:10.578 Controller Capabilities/Features 00:08:10.578 ================================ 00:08:10.578 Vendor ID: 1b36 00:08:10.578 Subsystem Vendor ID: 1af4 00:08:10.578 Serial Number: 12342 00:08:10.578 Model Number: QEMU NVMe Ctrl 00:08:10.578 Firmware Version: 8.0.0 00:08:10.578 Recommended Arb Burst: 6 00:08:10.578 IEEE OUI Identifier: 00 54 52 00:08:10.578 Multi-path I/O 00:08:10.578 May have multiple subsystem ports: No 00:08:10.578 May have multiple controllers: No 00:08:10.578 Associated with SR-IOV VF: No 00:08:10.578 Max Data Transfer Size: 524288 00:08:10.578 Max Number of Namespaces: 256 00:08:10.578 Max Number of I/O Queues: 64 00:08:10.578 NVMe Specification Version (VS): 1.4 00:08:10.578 NVMe Specification Version (Identify): 1.4 00:08:10.578 Maximum Queue Entries: 2048 00:08:10.578 Contiguous Queues Required: Yes 00:08:10.578 Arbitration Mechanisms Supported 00:08:10.578 Weighted Round Robin: Not Supported 00:08:10.578 Vendor Specific: Not Supported 00:08:10.578 Reset Timeout: 7500 ms 00:08:10.578 Doorbell Stride: 4 bytes 00:08:10.578 NVM Subsystem Reset: Not Supported 00:08:10.578 Command Sets Supported 00:08:10.578 NVM Command Set: Supported 00:08:10.578 Boot Partition: Not Supported 00:08:10.578 Memory Page Size Minimum: 4096 bytes 00:08:10.578 Memory Page Size Maximum: 65536 bytes 00:08:10.578 Persistent Memory Region: Not Supported 00:08:10.578 Optional Asynchronous Events Supported 00:08:10.578 
Namespace Attribute Notices: Supported 00:08:10.578 Firmware Activation Notices: Not Supported 00:08:10.578 ANA Change Notices: Not Supported 00:08:10.578 PLE Aggregate Log Change Notices: Not Supported 00:08:10.579 LBA Status Info Alert Notices: Not Supported 00:08:10.579 EGE Aggregate Log Change Notices: Not Supported 00:08:10.579 Normal NVM Subsystem Shutdown event: Not Supported 00:08:10.579 Zone Descriptor Change Notices: Not Supported 00:08:10.579 Discovery Log Change Notices: Not Supported 00:08:10.579 Controller Attributes 00:08:10.579 128-bit Host Identifier: Not Supported 00:08:10.579 Non-Operational Permissive Mode: Not Supported 00:08:10.579 NVM Sets: Not Supported 00:08:10.579 Read Recovery Levels: Not Supported 00:08:10.579 Endurance Groups: Not Supported 00:08:10.579 Predictable Latency Mode: Not Supported 00:08:10.579 Traffic Based Keep ALive: Not Supported 00:08:10.579 Namespace Granularity: Not Supported 00:08:10.579 SQ Associations: Not Supported 00:08:10.579 UUID List: Not Supported 00:08:10.579 Multi-Domain Subsystem: Not Supported 00:08:10.579 Fixed Capacity Management: Not Supported 00:08:10.579 Variable Capacity Management: Not Supported 00:08:10.579 Delete Endurance Group: Not Supported 00:08:10.579 Delete NVM Set: Not Supported 00:08:10.579 Extended LBA Formats Supported: Supported 00:08:10.579 Flexible Data Placement Supported: Not Supported 00:08:10.579 00:08:10.579 Controller Memory Buffer Support 00:08:10.579 ================================ 00:08:10.579 Supported: No 00:08:10.579 00:08:10.579 Persistent Memory Region Support 00:08:10.579 ================================ 00:08:10.579 Supported: No 00:08:10.579 00:08:10.579 Admin Command Set Attributes 00:08:10.579 ============================ 00:08:10.579 Security Send/Receive: Not Supported 00:08:10.579 Format NVM: Supported 00:08:10.579 Firmware Activate/Download: Not Supported 00:08:10.579 Namespace Management: Supported 00:08:10.579 Device Self-Test: Not Supported 00:08:10.579 Directives: Supported 00:08:10.579 NVMe-MI: Not Supported 00:08:10.579 Virtualization Management: Not Supported 00:08:10.579 Doorbell Buffer Config: Supported 00:08:10.579 Get LBA Status Capability: Not Supported 00:08:10.579 Command & Feature Lockdown Capability: Not Supported 00:08:10.579 Abort Command Limit: 4 00:08:10.579 Async Event Request Limit: 4 00:08:10.579 Number of Firmware Slots: N/A 00:08:10.579 Firmware Slot 1 Read-Only: N/A 00:08:10.579 Firmware Activation Without Reset: N/A 00:08:10.579 Multiple Update Detection Support: N/A 00:08:10.579 Firmware Update Granularity: No Information Provided 00:08:10.579 Per-Namespace SMART Log: Yes 00:08:10.579 Asymmetric Namespace Access Log Page: Not Supported 00:08:10.579 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:10.579 Command Effects Log Page: Supported 00:08:10.579 Get Log Page Extended Data: Supported 00:08:10.579 Telemetry Log Pages: Not Supported 00:08:10.579 Persistent Event Log Pages: Not Supported 00:08:10.579 Supported Log Pages Log Page: May Support 00:08:10.579 Commands Supported & Effects Log Page: Not Supported 00:08:10.579 Feature Identifiers & Effects Log Page:May Support 00:08:10.579 NVMe-MI Commands & Effects Log Page: May Support 00:08:10.579 Data Area 4 for Telemetry Log: Not Supported 00:08:10.579 Error Log Page Entries Supported: 1 00:08:10.579 Keep Alive: Not Supported 00:08:10.579 00:08:10.579 NVM Command Set Attributes 00:08:10.579 ========================== 00:08:10.579 Submission Queue Entry Size 00:08:10.579 Max: 64 00:08:10.579 Min: 64 
00:08:10.579 Completion Queue Entry Size 00:08:10.579 Max: 16 00:08:10.579 Min: 16 00:08:10.579 Number of Namespaces: 256 00:08:10.579 Compare Command: Supported 00:08:10.579 Write Uncorrectable Command: Not Supported 00:08:10.579 Dataset Management Command: Supported 00:08:10.579 Write Zeroes Command: Supported 00:08:10.579 Set Features Save Field: Supported 00:08:10.579 Reservations: Not Supported 00:08:10.579 Timestamp: Supported 00:08:10.579 Copy: Supported 00:08:10.579 Volatile Write Cache: Present 00:08:10.579 Atomic Write Unit (Normal): 1 00:08:10.579 Atomic Write Unit (PFail): 1 00:08:10.579 Atomic Compare & Write Unit: 1 00:08:10.579 Fused Compare & Write: Not Supported 00:08:10.579 Scatter-Gather List 00:08:10.579 SGL Command Set: Supported 00:08:10.579 SGL Keyed: Not Supported 00:08:10.579 SGL Bit Bucket Descriptor: Not Supported 00:08:10.579 SGL Metadata Pointer: Not Supported 00:08:10.579 Oversized SGL: Not Supported 00:08:10.579 SGL Metadata Address: Not Supported 00:08:10.579 SGL Offset: Not Supported 00:08:10.579 Transport SGL Data Block: Not Supported 00:08:10.579 Replay Protected Memory Block: Not Supported 00:08:10.579 00:08:10.579 Firmware Slot Information 00:08:10.579 ========================= 00:08:10.579 Active slot: 1 00:08:10.579 Slot 1 Firmware Revision: 1.0 00:08:10.579 00:08:10.579 00:08:10.579 Commands Supported and Effects 00:08:10.579 ============================== 00:08:10.579 Admin Commands 00:08:10.579 -------------- 00:08:10.579 Delete I/O Submission Queue (00h): Supported 00:08:10.579 Create I/O Submission Queue (01h): Supported 00:08:10.579 Get Log Page (02h): Supported 00:08:10.579 Delete I/O Completion Queue (04h): Supported 00:08:10.579 Create I/O Completion Queue (05h): Supported 00:08:10.579 Identify (06h): Supported 00:08:10.579 Abort (08h): Supported 00:08:10.579 Set Features (09h): Supported 00:08:10.579 Get Features (0Ah): Supported 00:08:10.579 Asynchronous Event Request (0Ch): Supported 00:08:10.579 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:10.579 Directive Send (19h): Supported 00:08:10.579 Directive Receive (1Ah): Supported 00:08:10.579 Virtualization Management (1Ch): Supported 00:08:10.579 Doorbell Buffer Config (7Ch): Supported 00:08:10.579 Format NVM (80h): Supported LBA-Change 00:08:10.579 I/O Commands 00:08:10.579 ------------ 00:08:10.579 Flush (00h): Supported LBA-Change 00:08:10.579 Write (01h): Supported LBA-Change 00:08:10.579 Read (02h): Supported 00:08:10.579 Compare (05h): Supported 00:08:10.579 Write Zeroes (08h): Supported LBA-Change 00:08:10.579 Dataset Management (09h): Supported LBA-Change 00:08:10.579 Unknown (0Ch): Supported 00:08:10.579 Unknown (12h): Supported 00:08:10.579 Copy (19h): Supported LBA-Change 00:08:10.579 Unknown (1Dh): Supported LBA-Change 00:08:10.579 00:08:10.579 Error Log 00:08:10.579 ========= 00:08:10.579 00:08:10.579 Arbitration 00:08:10.579 =========== 00:08:10.579 Arbitration Burst: no limit 00:08:10.579 00:08:10.579 Power Management 00:08:10.579 ================ 00:08:10.579 Number of Power States: 1 00:08:10.579 Current Power State: Power State #0 00:08:10.579 Power State #0: 00:08:10.579 Max Power: 25.00 W 00:08:10.579 Non-Operational State: Operational 00:08:10.579 Entry Latency: 16 microseconds 00:08:10.579 Exit Latency: 4 microseconds 00:08:10.579 Relative Read Throughput: 0 00:08:10.579 Relative Read Latency: 0 00:08:10.579 Relative Write Throughput: 0 00:08:10.579 Relative Write Latency: 0 00:08:10.579 Idle Power: Not Reported 00:08:10.579 Active Power: Not Reported 
00:08:10.579 Non-Operational Permissive Mode: Not Supported 00:08:10.579 00:08:10.579 Health Information 00:08:10.579 ================== 00:08:10.579 Critical Warnings: 00:08:10.579 Available Spare Space: OK 00:08:10.579 Temperature: OK 00:08:10.579 Device Reliability: OK 00:08:10.579 Read Only: No 00:08:10.579 Volatile Memory Backup: OK 00:08:10.579 Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.579 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:10.579 Available Spare: 0% 00:08:10.579 Available Spare Threshold: 0% 00:08:10.579 Life Percentage Used: 0% 00:08:10.579 Data Units Read: 2249 00:08:10.579 Data Units Written: 2036 00:08:10.579 Host Read Commands: 118552 00:08:10.579 Host Write Commands: 116821 00:08:10.579 Controller Busy Time: 0 minutes 00:08:10.579 Power Cycles: 0 00:08:10.579 Power On Hours: 0 hours 00:08:10.579 Unsafe Shutdowns: 0 00:08:10.579 Unrecoverable Media Errors: 0 00:08:10.579 Lifetime Error Log Entries: 0 00:08:10.579 Warning Temperature Time: 0 minutes 00:08:10.580 Critical Temperature Time: 0 minutes 00:08:10.580 00:08:10.580 Number of Queues 00:08:10.580 ================ 00:08:10.580 Number of I/O Submission Queues: 64 00:08:10.580 Number of I/O Completion Queues: 64 00:08:10.580 00:08:10.580 ZNS Specific Controller Data 00:08:10.580 ============================ 00:08:10.580 Zone Append Size Limit: 0 00:08:10.580 00:08:10.580 00:08:10.580 Active Namespaces 00:08:10.580 ================= 00:08:10.580 Namespace ID:1 00:08:10.580 Error Recovery Timeout: Unlimited 00:08:10.580 Command Set Identifier: NVM (00h) 00:08:10.580 Deallocate: Supported 00:08:10.580 Deallocated/Unwritten Error: Supported 00:08:10.580 Deallocated Read Value: All 0x00 00:08:10.580 Deallocate in Write Zeroes: Not Supported 00:08:10.580 Deallocated Guard Field: 0xFFFF 00:08:10.580 Flush: Supported 00:08:10.580 Reservation: Not Supported 00:08:10.580 Namespace Sharing Capabilities: Private 00:08:10.580 Size (in LBAs): 1048576 (4GiB) 00:08:10.580 Capacity (in LBAs): 1048576 (4GiB) 00:08:10.580 Utilization (in LBAs): 1048576 (4GiB) 00:08:10.580 Thin Provisioning: Not Supported 00:08:10.580 Per-NS Atomic Units: No 00:08:10.580 Maximum Single Source Range Length: 128 00:08:10.580 Maximum Copy Length: 128 00:08:10.580 Maximum Source Range Count: 128 00:08:10.580 NGUID/EUI64 Never Reused: No 00:08:10.580 Namespace Write Protected: No 00:08:10.580 Number of LBA Formats: 8 00:08:10.580 Current LBA Format: LBA Format #04 00:08:10.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:10.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:10.580 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.580 00:08:10.580 NVM Specific Namespace Data 00:08:10.580 =========================== 00:08:10.580 Logical Block Storage Tag Mask: 0 00:08:10.580 Protection Information Capabilities: 00:08:10.580 16b Guard Protection Information Storage Tag Support: No 00:08:10.580 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.580 Storage Tag Check Read Support: No 00:08:10.580 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:08:10.580 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Namespace ID:2 00:08:10.580 Error Recovery Timeout: Unlimited 00:08:10.580 Command Set Identifier: NVM (00h) 00:08:10.580 Deallocate: Supported 00:08:10.580 Deallocated/Unwritten Error: Supported 00:08:10.580 Deallocated Read Value: All 0x00 00:08:10.580 Deallocate in Write Zeroes: Not Supported 00:08:10.580 Deallocated Guard Field: 0xFFFF 00:08:10.580 Flush: Supported 00:08:10.580 Reservation: Not Supported 00:08:10.580 Namespace Sharing Capabilities: Private 00:08:10.580 Size (in LBAs): 1048576 (4GiB) 00:08:10.580 Capacity (in LBAs): 1048576 (4GiB) 00:08:10.580 Utilization (in LBAs): 1048576 (4GiB) 00:08:10.580 Thin Provisioning: Not Supported 00:08:10.580 Per-NS Atomic Units: No 00:08:10.580 Maximum Single Source Range Length: 128 00:08:10.580 Maximum Copy Length: 128 00:08:10.580 Maximum Source Range Count: 128 00:08:10.580 NGUID/EUI64 Never Reused: No 00:08:10.580 Namespace Write Protected: No 00:08:10.580 Number of LBA Formats: 8 00:08:10.580 Current LBA Format: LBA Format #04 00:08:10.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:10.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:10.580 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.580 00:08:10.580 NVM Specific Namespace Data 00:08:10.580 =========================== 00:08:10.580 Logical Block Storage Tag Mask: 0 00:08:10.580 Protection Information Capabilities: 00:08:10.580 16b Guard Protection Information Storage Tag Support: No 00:08:10.580 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.580 Storage Tag Check Read Support: No 00:08:10.580 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Namespace ID:3 00:08:10.580 Error Recovery Timeout: Unlimited 00:08:10.580 Command Set Identifier: NVM (00h) 00:08:10.580 Deallocate: Supported 00:08:10.580 
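
The GiB figures in the namespace listings follow from the LBA count multiplied by the data size of the current LBA format (#04, 4096-byte blocks): 1048576 LBAs x 4096 bytes = 4 GiB for the namespaces above, and 1310720 LBAs x 4096 bytes = 5 GiB for the 12341 controller's namespace earlier in this output. A quick illustrative check in shell arithmetic:

  # Illustrative check of the reported namespace sizes (LBA count x 4096).
  echo $(( 1048576 * 4096 ))   # 4294967296 bytes = 4 GiB
  echo $(( 1310720 * 4096 ))   # 5368709120 bytes = 5 GiB
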
Deallocated/Unwritten Error: Supported 00:08:10.580 Deallocated Read Value: All 0x00 00:08:10.580 Deallocate in Write Zeroes: Not Supported 00:08:10.580 Deallocated Guard Field: 0xFFFF 00:08:10.580 Flush: Supported 00:08:10.580 Reservation: Not Supported 00:08:10.580 Namespace Sharing Capabilities: Private 00:08:10.580 Size (in LBAs): 1048576 (4GiB) 00:08:10.580 Capacity (in LBAs): 1048576 (4GiB) 00:08:10.580 Utilization (in LBAs): 1048576 (4GiB) 00:08:10.580 Thin Provisioning: Not Supported 00:08:10.580 Per-NS Atomic Units: No 00:08:10.580 Maximum Single Source Range Length: 128 00:08:10.580 Maximum Copy Length: 128 00:08:10.580 Maximum Source Range Count: 128 00:08:10.580 NGUID/EUI64 Never Reused: No 00:08:10.580 Namespace Write Protected: No 00:08:10.580 Number of LBA Formats: 8 00:08:10.580 Current LBA Format: LBA Format #04 00:08:10.580 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:10.580 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.580 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:10.580 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.580 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.580 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.580 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.580 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.580 00:08:10.580 NVM Specific Namespace Data 00:08:10.580 =========================== 00:08:10.580 Logical Block Storage Tag Mask: 0 00:08:10.580 Protection Information Capabilities: 00:08:10.580 16b Guard Protection Information Storage Tag Support: No 00:08:10.580 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.580 Storage Tag Check Read Support: No 00:08:10.580 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.580 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:10.580 11:24:09 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:10.839 ===================================================== 00:08:10.839 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:10.839 ===================================================== 00:08:10.839 Controller Capabilities/Features 00:08:10.839 ================================ 00:08:10.839 Vendor ID: 1b36 00:08:10.839 Subsystem Vendor ID: 1af4 00:08:10.839 Serial Number: 12343 00:08:10.839 Model Number: QEMU NVMe Ctrl 00:08:10.839 Firmware Version: 8.0.0 00:08:10.839 Recommended Arb Burst: 6 00:08:10.839 IEEE OUI Identifier: 00 54 52 00:08:10.839 Multi-path I/O 00:08:10.839 May have multiple subsystem ports: No 00:08:10.839 May have multiple controllers: Yes 00:08:10.839 Associated with SR-IOV VF: No 00:08:10.839 Max 
Data Transfer Size: 524288 00:08:10.839 Max Number of Namespaces: 256 00:08:10.839 Max Number of I/O Queues: 64 00:08:10.839 NVMe Specification Version (VS): 1.4 00:08:10.839 NVMe Specification Version (Identify): 1.4 00:08:10.839 Maximum Queue Entries: 2048 00:08:10.839 Contiguous Queues Required: Yes 00:08:10.839 Arbitration Mechanisms Supported 00:08:10.839 Weighted Round Robin: Not Supported 00:08:10.839 Vendor Specific: Not Supported 00:08:10.839 Reset Timeout: 7500 ms 00:08:10.839 Doorbell Stride: 4 bytes 00:08:10.839 NVM Subsystem Reset: Not Supported 00:08:10.839 Command Sets Supported 00:08:10.839 NVM Command Set: Supported 00:08:10.839 Boot Partition: Not Supported 00:08:10.839 Memory Page Size Minimum: 4096 bytes 00:08:10.839 Memory Page Size Maximum: 65536 bytes 00:08:10.839 Persistent Memory Region: Not Supported 00:08:10.839 Optional Asynchronous Events Supported 00:08:10.839 Namespace Attribute Notices: Supported 00:08:10.839 Firmware Activation Notices: Not Supported 00:08:10.839 ANA Change Notices: Not Supported 00:08:10.839 PLE Aggregate Log Change Notices: Not Supported 00:08:10.839 LBA Status Info Alert Notices: Not Supported 00:08:10.839 EGE Aggregate Log Change Notices: Not Supported 00:08:10.839 Normal NVM Subsystem Shutdown event: Not Supported 00:08:10.839 Zone Descriptor Change Notices: Not Supported 00:08:10.839 Discovery Log Change Notices: Not Supported 00:08:10.839 Controller Attributes 00:08:10.839 128-bit Host Identifier: Not Supported 00:08:10.839 Non-Operational Permissive Mode: Not Supported 00:08:10.839 NVM Sets: Not Supported 00:08:10.839 Read Recovery Levels: Not Supported 00:08:10.839 Endurance Groups: Supported 00:08:10.839 Predictable Latency Mode: Not Supported 00:08:10.839 Traffic Based Keep ALive: Not Supported 00:08:10.839 Namespace Granularity: Not Supported 00:08:10.839 SQ Associations: Not Supported 00:08:10.839 UUID List: Not Supported 00:08:10.839 Multi-Domain Subsystem: Not Supported 00:08:10.839 Fixed Capacity Management: Not Supported 00:08:10.839 Variable Capacity Management: Not Supported 00:08:10.839 Delete Endurance Group: Not Supported 00:08:10.839 Delete NVM Set: Not Supported 00:08:10.839 Extended LBA Formats Supported: Supported 00:08:10.839 Flexible Data Placement Supported: Supported 00:08:10.839 00:08:10.839 Controller Memory Buffer Support 00:08:10.839 ================================ 00:08:10.839 Supported: No 00:08:10.839 00:08:10.839 Persistent Memory Region Support 00:08:10.839 ================================ 00:08:10.839 Supported: No 00:08:10.839 00:08:10.839 Admin Command Set Attributes 00:08:10.839 ============================ 00:08:10.839 Security Send/Receive: Not Supported 00:08:10.839 Format NVM: Supported 00:08:10.839 Firmware Activate/Download: Not Supported 00:08:10.839 Namespace Management: Supported 00:08:10.839 Device Self-Test: Not Supported 00:08:10.839 Directives: Supported 00:08:10.839 NVMe-MI: Not Supported 00:08:10.839 Virtualization Management: Not Supported 00:08:10.839 Doorbell Buffer Config: Supported 00:08:10.839 Get LBA Status Capability: Not Supported 00:08:10.839 Command & Feature Lockdown Capability: Not Supported 00:08:10.839 Abort Command Limit: 4 00:08:10.839 Async Event Request Limit: 4 00:08:10.839 Number of Firmware Slots: N/A 00:08:10.839 Firmware Slot 1 Read-Only: N/A 00:08:10.839 Firmware Activation Without Reset: N/A 00:08:10.839 Multiple Update Detection Support: N/A 00:08:10.840 Firmware Update Granularity: No Information Provided 00:08:10.840 Per-Namespace SMART Log: Yes 
00:08:10.840 Asymmetric Namespace Access Log Page: Not Supported 00:08:10.840 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:10.840 Command Effects Log Page: Supported 00:08:10.840 Get Log Page Extended Data: Supported 00:08:10.840 Telemetry Log Pages: Not Supported 00:08:10.840 Persistent Event Log Pages: Not Supported 00:08:10.840 Supported Log Pages Log Page: May Support 00:08:10.840 Commands Supported & Effects Log Page: Not Supported 00:08:10.840 Feature Identifiers & Effects Log Page:May Support 00:08:10.840 NVMe-MI Commands & Effects Log Page: May Support 00:08:10.840 Data Area 4 for Telemetry Log: Not Supported 00:08:10.840 Error Log Page Entries Supported: 1 00:08:10.840 Keep Alive: Not Supported 00:08:10.840 00:08:10.840 NVM Command Set Attributes 00:08:10.840 ========================== 00:08:10.840 Submission Queue Entry Size 00:08:10.840 Max: 64 00:08:10.840 Min: 64 00:08:10.840 Completion Queue Entry Size 00:08:10.840 Max: 16 00:08:10.840 Min: 16 00:08:10.840 Number of Namespaces: 256 00:08:10.840 Compare Command: Supported 00:08:10.840 Write Uncorrectable Command: Not Supported 00:08:10.840 Dataset Management Command: Supported 00:08:10.840 Write Zeroes Command: Supported 00:08:10.840 Set Features Save Field: Supported 00:08:10.840 Reservations: Not Supported 00:08:10.840 Timestamp: Supported 00:08:10.840 Copy: Supported 00:08:10.840 Volatile Write Cache: Present 00:08:10.840 Atomic Write Unit (Normal): 1 00:08:10.840 Atomic Write Unit (PFail): 1 00:08:10.840 Atomic Compare & Write Unit: 1 00:08:10.840 Fused Compare & Write: Not Supported 00:08:10.840 Scatter-Gather List 00:08:10.840 SGL Command Set: Supported 00:08:10.840 SGL Keyed: Not Supported 00:08:10.840 SGL Bit Bucket Descriptor: Not Supported 00:08:10.840 SGL Metadata Pointer: Not Supported 00:08:10.840 Oversized SGL: Not Supported 00:08:10.840 SGL Metadata Address: Not Supported 00:08:10.840 SGL Offset: Not Supported 00:08:10.840 Transport SGL Data Block: Not Supported 00:08:10.840 Replay Protected Memory Block: Not Supported 00:08:10.840 00:08:10.840 Firmware Slot Information 00:08:10.840 ========================= 00:08:10.840 Active slot: 1 00:08:10.840 Slot 1 Firmware Revision: 1.0 00:08:10.840 00:08:10.840 00:08:10.840 Commands Supported and Effects 00:08:10.840 ============================== 00:08:10.840 Admin Commands 00:08:10.840 -------------- 00:08:10.840 Delete I/O Submission Queue (00h): Supported 00:08:10.840 Create I/O Submission Queue (01h): Supported 00:08:10.840 Get Log Page (02h): Supported 00:08:10.840 Delete I/O Completion Queue (04h): Supported 00:08:10.840 Create I/O Completion Queue (05h): Supported 00:08:10.840 Identify (06h): Supported 00:08:10.840 Abort (08h): Supported 00:08:10.840 Set Features (09h): Supported 00:08:10.840 Get Features (0Ah): Supported 00:08:10.840 Asynchronous Event Request (0Ch): Supported 00:08:10.840 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:10.840 Directive Send (19h): Supported 00:08:10.840 Directive Receive (1Ah): Supported 00:08:10.840 Virtualization Management (1Ch): Supported 00:08:10.840 Doorbell Buffer Config (7Ch): Supported 00:08:10.840 Format NVM (80h): Supported LBA-Change 00:08:10.840 I/O Commands 00:08:10.840 ------------ 00:08:10.840 Flush (00h): Supported LBA-Change 00:08:10.840 Write (01h): Supported LBA-Change 00:08:10.840 Read (02h): Supported 00:08:10.840 Compare (05h): Supported 00:08:10.840 Write Zeroes (08h): Supported LBA-Change 00:08:10.840 Dataset Management (09h): Supported LBA-Change 00:08:10.840 Unknown (0Ch): 
Supported 00:08:10.840 Unknown (12h): Supported 00:08:10.840 Copy (19h): Supported LBA-Change 00:08:10.840 Unknown (1Dh): Supported LBA-Change 00:08:10.840 00:08:10.840 Error Log 00:08:10.840 ========= 00:08:10.840 00:08:10.840 Arbitration 00:08:10.840 =========== 00:08:10.840 Arbitration Burst: no limit 00:08:10.840 00:08:10.840 Power Management 00:08:10.840 ================ 00:08:10.840 Number of Power States: 1 00:08:10.840 Current Power State: Power State #0 00:08:10.840 Power State #0: 00:08:10.840 Max Power: 25.00 W 00:08:10.840 Non-Operational State: Operational 00:08:10.840 Entry Latency: 16 microseconds 00:08:10.840 Exit Latency: 4 microseconds 00:08:10.840 Relative Read Throughput: 0 00:08:10.840 Relative Read Latency: 0 00:08:10.840 Relative Write Throughput: 0 00:08:10.840 Relative Write Latency: 0 00:08:10.840 Idle Power: Not Reported 00:08:10.840 Active Power: Not Reported 00:08:10.840 Non-Operational Permissive Mode: Not Supported 00:08:10.840 00:08:10.840 Health Information 00:08:10.840 ================== 00:08:10.840 Critical Warnings: 00:08:10.840 Available Spare Space: OK 00:08:10.840 Temperature: OK 00:08:10.840 Device Reliability: OK 00:08:10.840 Read Only: No 00:08:10.840 Volatile Memory Backup: OK 00:08:10.840 Current Temperature: 323 Kelvin (50 Celsius) 00:08:10.840 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:10.840 Available Spare: 0% 00:08:10.840 Available Spare Threshold: 0% 00:08:10.840 Life Percentage Used: 0% 00:08:10.840 Data Units Read: 875 00:08:10.840 Data Units Written: 804 00:08:10.840 Host Read Commands: 40576 00:08:10.840 Host Write Commands: 39999 00:08:10.840 Controller Busy Time: 0 minutes 00:08:10.840 Power Cycles: 0 00:08:10.840 Power On Hours: 0 hours 00:08:10.840 Unsafe Shutdowns: 0 00:08:10.840 Unrecoverable Media Errors: 0 00:08:10.840 Lifetime Error Log Entries: 0 00:08:10.840 Warning Temperature Time: 0 minutes 00:08:10.840 Critical Temperature Time: 0 minutes 00:08:10.840 00:08:10.840 Number of Queues 00:08:10.840 ================ 00:08:10.840 Number of I/O Submission Queues: 64 00:08:10.840 Number of I/O Completion Queues: 64 00:08:10.840 00:08:10.840 ZNS Specific Controller Data 00:08:10.840 ============================ 00:08:10.840 Zone Append Size Limit: 0 00:08:10.840 00:08:10.840 00:08:10.840 Active Namespaces 00:08:10.840 ================= 00:08:10.840 Namespace ID:1 00:08:10.840 Error Recovery Timeout: Unlimited 00:08:10.840 Command Set Identifier: NVM (00h) 00:08:10.840 Deallocate: Supported 00:08:10.840 Deallocated/Unwritten Error: Supported 00:08:10.840 Deallocated Read Value: All 0x00 00:08:10.840 Deallocate in Write Zeroes: Not Supported 00:08:10.840 Deallocated Guard Field: 0xFFFF 00:08:10.840 Flush: Supported 00:08:10.840 Reservation: Not Supported 00:08:10.840 Namespace Sharing Capabilities: Multiple Controllers 00:08:10.840 Size (in LBAs): 262144 (1GiB) 00:08:10.840 Capacity (in LBAs): 262144 (1GiB) 00:08:10.840 Utilization (in LBAs): 262144 (1GiB) 00:08:10.840 Thin Provisioning: Not Supported 00:08:10.840 Per-NS Atomic Units: No 00:08:10.840 Maximum Single Source Range Length: 128 00:08:10.840 Maximum Copy Length: 128 00:08:10.840 Maximum Source Range Count: 128 00:08:10.840 NGUID/EUI64 Never Reused: No 00:08:10.840 Namespace Write Protected: No 00:08:10.840 Endurance group ID: 1 00:08:10.840 Number of LBA Formats: 8 00:08:10.840 Current LBA Format: LBA Format #04 00:08:10.840 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:10.840 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:10.840 LBA Format #02: 
Data Size: 512 Metadata Size: 16 00:08:10.840 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:10.840 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:10.840 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:10.840 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:10.840 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:10.840 00:08:10.840 Get Feature FDP: 00:08:10.840 ================ 00:08:10.840 Enabled: Yes 00:08:10.840 FDP configuration index: 0 00:08:10.840 00:08:10.840 FDP configurations log page 00:08:10.840 =========================== 00:08:10.840 Number of FDP configurations: 1 00:08:10.840 Version: 0 00:08:10.840 Size: 112 00:08:10.840 FDP Configuration Descriptor: 0 00:08:10.840 Descriptor Size: 96 00:08:10.840 Reclaim Group Identifier format: 2 00:08:10.840 FDP Volatile Write Cache: Not Present 00:08:10.840 FDP Configuration: Valid 00:08:10.841 Vendor Specific Size: 0 00:08:10.841 Number of Reclaim Groups: 2 00:08:10.841 Number of Recalim Unit Handles: 8 00:08:10.841 Max Placement Identifiers: 128 00:08:10.841 Number of Namespaces Suppprted: 256 00:08:10.841 Reclaim unit Nominal Size: 6000000 bytes 00:08:10.841 Estimated Reclaim Unit Time Limit: Not Reported 00:08:10.841 RUH Desc #000: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #001: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #002: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #003: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #004: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #005: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #006: RUH Type: Initially Isolated 00:08:10.841 RUH Desc #007: RUH Type: Initially Isolated 00:08:10.841 00:08:10.841 FDP reclaim unit handle usage log page 00:08:10.841 ====================================== 00:08:10.841 Number of Reclaim Unit Handles: 8 00:08:10.841 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:10.841 RUH Usage Desc #001: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #002: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #003: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #004: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #005: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #006: RUH Attributes: Unused 00:08:10.841 RUH Usage Desc #007: RUH Attributes: Unused 00:08:10.841 00:08:10.841 FDP statistics log page 00:08:10.841 ======================= 00:08:10.841 Host bytes with metadata written: 517513216 00:08:10.841 Media bytes with metadata written: 517570560 00:08:10.841 Media bytes erased: 0 00:08:10.841 00:08:10.841 FDP events log page 00:08:10.841 =================== 00:08:10.841 Number of FDP events: 0 00:08:10.841 00:08:10.841 NVM Specific Namespace Data 00:08:10.841 =========================== 00:08:10.841 Logical Block Storage Tag Mask: 0 00:08:10.841 Protection Information Capabilities: 00:08:10.841 16b Guard Protection Information Storage Tag Support: No 00:08:10.841 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:10.841 Storage Tag Check Read Support: No 00:08:10.841 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #04: Storage Tag Size: 0 , Protection 
Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:10.841 00:08:10.841 real 0m1.245s 00:08:10.841 user 0m0.445s 00:08:10.841 sys 0m0.557s 00:08:10.841 11:24:09 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:10.841 11:24:09 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:10.841 ************************************ 00:08:10.841 END TEST nvme_identify 00:08:10.841 ************************************ 00:08:10.841 11:24:10 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:10.841 11:24:10 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:10.841 11:24:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:10.841 11:24:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:10.841 ************************************ 00:08:10.841 START TEST nvme_perf 00:08:10.841 ************************************ 00:08:10.841 11:24:10 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:08:10.841 11:24:10 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:12.215 Initializing NVMe Controllers 00:08:12.215 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:12.215 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:12.215 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:12.215 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:12.215 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:12.215 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:12.215 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:12.215 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:12.215 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:12.215 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:12.215 Initialization complete. Launching workers. 
00:08:12.215 ======================================================== 00:08:12.215 Latency(us) 00:08:12.215 Device Information : IOPS MiB/s Average min max 00:08:12.215 PCIE (0000:00:13.0) NSID 1 from core 0: 8509.33 99.72 15068.20 10766.92 38756.09 00:08:12.215 PCIE (0000:00:10.0) NSID 1 from core 0: 8509.33 99.72 15045.94 9949.78 37388.92 00:08:12.215 PCIE (0000:00:11.0) NSID 1 from core 0: 8509.33 99.72 15024.59 10169.96 35891.02 00:08:12.215 PCIE (0000:00:12.0) NSID 1 from core 0: 8509.33 99.72 15002.06 9360.75 35557.03 00:08:12.215 PCIE (0000:00:12.0) NSID 2 from core 0: 8509.33 99.72 14979.16 9255.19 33751.35 00:08:12.215 PCIE (0000:00:12.0) NSID 3 from core 0: 8573.31 100.47 14845.02 8995.41 26459.02 00:08:12.215 ======================================================== 00:08:12.215 Total : 51119.95 599.06 14993.98 8995.41 38756.09 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11443.594us 00:08:12.215 10.00000% : 12552.665us 00:08:12.215 25.00000% : 13308.849us 00:08:12.215 50.00000% : 14821.218us 00:08:12.215 75.00000% : 16232.763us 00:08:12.215 90.00000% : 17543.483us 00:08:12.215 95.00000% : 18350.080us 00:08:12.215 98.00000% : 19559.975us 00:08:12.215 99.00000% : 30045.735us 00:08:12.215 99.50000% : 37910.055us 00:08:12.215 99.90000% : 38716.652us 00:08:12.215 99.99000% : 38918.302us 00:08:12.215 99.99900% : 38918.302us 00:08:12.215 99.99990% : 38918.302us 00:08:12.215 99.99999% : 38918.302us 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11342.769us 00:08:12.215 10.00000% : 12552.665us 00:08:12.215 25.00000% : 13409.674us 00:08:12.215 50.00000% : 14720.394us 00:08:12.215 75.00000% : 16333.588us 00:08:12.215 90.00000% : 17543.483us 00:08:12.215 95.00000% : 18249.255us 00:08:12.215 98.00000% : 19459.151us 00:08:12.215 99.00000% : 29037.489us 00:08:12.215 99.50000% : 36498.511us 00:08:12.215 99.90000% : 37305.108us 00:08:12.215 99.99000% : 37506.757us 00:08:12.215 99.99900% : 37506.757us 00:08:12.215 99.99990% : 37506.757us 00:08:12.215 99.99999% : 37506.757us 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11342.769us 00:08:12.215 10.00000% : 12502.252us 00:08:12.215 25.00000% : 13510.498us 00:08:12.215 50.00000% : 14619.569us 00:08:12.215 75.00000% : 16232.763us 00:08:12.215 90.00000% : 17543.483us 00:08:12.215 95.00000% : 18350.080us 00:08:12.215 98.00000% : 19660.800us 00:08:12.215 99.00000% : 28029.243us 00:08:12.215 99.50000% : 34885.317us 00:08:12.215 99.90000% : 35893.563us 00:08:12.215 99.99000% : 35893.563us 00:08:12.215 99.99900% : 35893.563us 00:08:12.215 99.99990% : 35893.563us 00:08:12.215 99.99999% : 35893.563us 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11141.120us 00:08:12.215 10.00000% : 12552.665us 00:08:12.215 25.00000% : 13409.674us 00:08:12.215 50.00000% : 14720.394us 00:08:12.215 75.00000% : 16131.938us 00:08:12.215 90.00000% : 17543.483us 00:08:12.215 95.00000% : 18148.431us 00:08:12.215 98.00000% : 19156.677us 
00:08:12.215 99.00000% : 27424.295us 00:08:12.215 99.50000% : 34683.668us 00:08:12.215 99.90000% : 35490.265us 00:08:12.215 99.99000% : 35691.914us 00:08:12.215 99.99900% : 35691.914us 00:08:12.215 99.99990% : 35691.914us 00:08:12.215 99.99999% : 35691.914us 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11443.594us 00:08:12.215 10.00000% : 12451.840us 00:08:12.215 25.00000% : 13409.674us 00:08:12.215 50.00000% : 14720.394us 00:08:12.215 75.00000% : 16232.763us 00:08:12.215 90.00000% : 17543.483us 00:08:12.215 95.00000% : 18249.255us 00:08:12.215 98.00000% : 19156.677us 00:08:12.215 99.00000% : 26214.400us 00:08:12.215 99.50000% : 32868.825us 00:08:12.215 99.90000% : 33675.422us 00:08:12.215 99.99000% : 33877.071us 00:08:12.215 99.99900% : 33877.071us 00:08:12.215 99.99990% : 33877.071us 00:08:12.215 99.99999% : 33877.071us 00:08:12.215 00:08:12.215 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:12.215 ================================================================================= 00:08:12.215 1.00000% : 11393.182us 00:08:12.215 10.00000% : 12451.840us 00:08:12.215 25.00000% : 13308.849us 00:08:12.215 50.00000% : 14720.394us 00:08:12.215 75.00000% : 16232.763us 00:08:12.215 90.00000% : 17442.658us 00:08:12.215 95.00000% : 18249.255us 00:08:12.215 98.00000% : 19055.852us 00:08:12.215 99.00000% : 19660.800us 00:08:12.215 99.50000% : 25508.628us 00:08:12.215 99.90000% : 26416.049us 00:08:12.215 99.99000% : 26617.698us 00:08:12.215 99.99900% : 26617.698us 00:08:12.215 99.99990% : 26617.698us 00:08:12.215 99.99999% : 26617.698us 00:08:12.215 00:08:12.215 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:12.215 ============================================================================== 00:08:12.215 Range in us Cumulative IO count 00:08:12.215 10737.822 - 10788.234: 0.0235% ( 2) 00:08:12.215 10788.234 - 10838.646: 0.0470% ( 2) 00:08:12.215 10838.646 - 10889.058: 0.0822% ( 3) 00:08:12.215 10889.058 - 10939.471: 0.1175% ( 3) 00:08:12.215 10939.471 - 10989.883: 0.1527% ( 3) 00:08:12.215 10989.883 - 11040.295: 0.1997% ( 4) 00:08:12.215 11040.295 - 11090.708: 0.2585% ( 5) 00:08:12.215 11090.708 - 11141.120: 0.4229% ( 14) 00:08:12.215 11141.120 - 11191.532: 0.5404% ( 10) 00:08:12.215 11191.532 - 11241.945: 0.6461% ( 9) 00:08:12.215 11241.945 - 11292.357: 0.6931% ( 4) 00:08:12.215 11292.357 - 11342.769: 0.7871% ( 8) 00:08:12.215 11342.769 - 11393.182: 0.9281% ( 12) 00:08:12.215 11393.182 - 11443.594: 1.0221% ( 8) 00:08:12.215 11443.594 - 11494.006: 1.1278% ( 9) 00:08:12.215 11494.006 - 11544.418: 1.2336% ( 9) 00:08:12.215 11544.418 - 11594.831: 1.3980% ( 14) 00:08:12.215 11594.831 - 11645.243: 1.5625% ( 14) 00:08:12.215 11645.243 - 11695.655: 1.7387% ( 15) 00:08:12.215 11695.655 - 11746.068: 1.8914% ( 13) 00:08:12.215 11746.068 - 11796.480: 2.1147% ( 19) 00:08:12.215 11796.480 - 11846.892: 2.3496% ( 20) 00:08:12.215 11846.892 - 11897.305: 2.6668% ( 27) 00:08:12.215 11897.305 - 11947.717: 2.9840% ( 27) 00:08:12.215 11947.717 - 11998.129: 3.3365% ( 30) 00:08:12.215 11998.129 - 12048.542: 3.7242% ( 33) 00:08:12.215 12048.542 - 12098.954: 4.2646% ( 46) 00:08:12.215 12098.954 - 12149.366: 4.7462% ( 41) 00:08:12.215 12149.366 - 12199.778: 5.2632% ( 44) 00:08:12.215 12199.778 - 12250.191: 5.8858% ( 53) 00:08:12.215 12250.191 - 12300.603: 6.5320% ( 55) 00:08:12.215 12300.603 - 12351.015: 7.2721% ( 63) 
00:08:12.215 12351.015 - 12401.428: 8.1297% ( 73) 00:08:12.215 12401.428 - 12451.840: 9.0343% ( 77) 00:08:12.215 12451.840 - 12502.252: 9.9154% ( 75) 00:08:12.215 12502.252 - 12552.665: 10.8670% ( 81) 00:08:12.215 12552.665 - 12603.077: 11.8656% ( 85) 00:08:12.215 12603.077 - 12653.489: 12.8877% ( 87) 00:08:12.216 12653.489 - 12703.902: 13.9333% ( 89) 00:08:12.216 12703.902 - 12754.314: 14.9906% ( 90) 00:08:12.216 12754.314 - 12804.726: 16.0362% ( 89) 00:08:12.216 12804.726 - 12855.138: 17.1875% ( 98) 00:08:12.216 12855.138 - 12905.551: 18.2801% ( 93) 00:08:12.216 12905.551 - 13006.375: 20.3477% ( 176) 00:08:12.216 13006.375 - 13107.200: 22.4037% ( 175) 00:08:12.216 13107.200 - 13208.025: 24.5301% ( 181) 00:08:12.216 13208.025 - 13308.849: 26.5038% ( 168) 00:08:12.216 13308.849 - 13409.674: 28.3130% ( 154) 00:08:12.216 13409.674 - 13510.498: 30.0869% ( 151) 00:08:12.216 13510.498 - 13611.323: 31.5437% ( 124) 00:08:12.216 13611.323 - 13712.148: 32.9770% ( 122) 00:08:12.216 13712.148 - 13812.972: 34.5747% ( 136) 00:08:12.216 13812.972 - 13913.797: 36.0902% ( 129) 00:08:12.216 13913.797 - 14014.622: 37.7115% ( 138) 00:08:12.216 14014.622 - 14115.446: 39.1682% ( 124) 00:08:12.216 14115.446 - 14216.271: 40.6955% ( 130) 00:08:12.216 14216.271 - 14317.095: 42.1288% ( 122) 00:08:12.216 14317.095 - 14417.920: 43.5973% ( 125) 00:08:12.216 14417.920 - 14518.745: 45.1010% ( 128) 00:08:12.216 14518.745 - 14619.569: 46.9690% ( 159) 00:08:12.216 14619.569 - 14720.394: 49.0014% ( 173) 00:08:12.216 14720.394 - 14821.218: 50.8811% ( 160) 00:08:12.216 14821.218 - 14922.043: 52.9253% ( 174) 00:08:12.216 14922.043 - 15022.868: 55.1574% ( 190) 00:08:12.216 15022.868 - 15123.692: 57.8360% ( 228) 00:08:12.216 15123.692 - 15224.517: 60.1269% ( 195) 00:08:12.216 15224.517 - 15325.342: 62.2415% ( 180) 00:08:12.216 15325.342 - 15426.166: 64.2035% ( 167) 00:08:12.216 15426.166 - 15526.991: 66.1067% ( 162) 00:08:12.216 15526.991 - 15627.815: 67.9041% ( 153) 00:08:12.216 15627.815 - 15728.640: 69.5136% ( 137) 00:08:12.216 15728.640 - 15829.465: 70.9586% ( 123) 00:08:12.216 15829.465 - 15930.289: 72.2979% ( 114) 00:08:12.216 15930.289 - 16031.114: 73.5197% ( 104) 00:08:12.216 16031.114 - 16131.938: 74.5653% ( 89) 00:08:12.216 16131.938 - 16232.763: 75.6579% ( 93) 00:08:12.216 16232.763 - 16333.588: 76.6095% ( 81) 00:08:12.216 16333.588 - 16434.412: 77.6081% ( 85) 00:08:12.216 16434.412 - 16535.237: 78.7711% ( 99) 00:08:12.216 16535.237 - 16636.062: 80.0399% ( 108) 00:08:12.216 16636.062 - 16736.886: 81.4850% ( 123) 00:08:12.216 16736.886 - 16837.711: 82.6010% ( 95) 00:08:12.216 16837.711 - 16938.535: 83.7289% ( 96) 00:08:12.216 16938.535 - 17039.360: 85.1269% ( 119) 00:08:12.216 17039.360 - 17140.185: 86.4544% ( 113) 00:08:12.216 17140.185 - 17241.009: 87.7115% ( 107) 00:08:12.216 17241.009 - 17341.834: 88.8158% ( 94) 00:08:12.216 17341.834 - 17442.658: 89.8614% ( 89) 00:08:12.216 17442.658 - 17543.483: 90.9187% ( 90) 00:08:12.216 17543.483 - 17644.308: 91.7528% ( 71) 00:08:12.216 17644.308 - 17745.132: 92.3755% ( 53) 00:08:12.216 17745.132 - 17845.957: 93.1039% ( 62) 00:08:12.216 17845.957 - 17946.782: 93.7030% ( 51) 00:08:12.216 17946.782 - 18047.606: 94.1964% ( 42) 00:08:12.216 18047.606 - 18148.431: 94.6076% ( 35) 00:08:12.216 18148.431 - 18249.255: 94.9836% ( 32) 00:08:12.216 18249.255 - 18350.080: 95.4417% ( 39) 00:08:12.216 18350.080 - 18450.905: 95.8059% ( 31) 00:08:12.216 18450.905 - 18551.729: 96.0409% ( 20) 00:08:12.216 18551.729 - 18652.554: 96.2171% ( 15) 00:08:12.216 18652.554 - 18753.378: 96.3698% ( 13) 
00:08:12.216 18753.378 - 18854.203: 96.5226% ( 13) 00:08:12.216 18854.203 - 18955.028: 96.6870% ( 14) 00:08:12.216 18955.028 - 19055.852: 96.9690% ( 24) 00:08:12.216 19055.852 - 19156.677: 97.2274% ( 22) 00:08:12.216 19156.677 - 19257.502: 97.4624% ( 20) 00:08:12.216 19257.502 - 19358.326: 97.6621% ( 17) 00:08:12.216 19358.326 - 19459.151: 97.8501% ( 16) 00:08:12.216 19459.151 - 19559.975: 98.0381% ( 16) 00:08:12.216 19559.975 - 19660.800: 98.1673% ( 11) 00:08:12.216 19660.800 - 19761.625: 98.2730% ( 9) 00:08:12.216 19761.625 - 19862.449: 98.3788% ( 9) 00:08:12.216 19862.449 - 19963.274: 98.4610% ( 7) 00:08:12.216 19963.274 - 20064.098: 98.4962% ( 3) 00:08:12.216 28835.840 - 29037.489: 98.5432% ( 4) 00:08:12.216 29037.489 - 29239.138: 98.6372% ( 8) 00:08:12.216 29239.138 - 29440.788: 98.7547% ( 10) 00:08:12.216 29440.788 - 29642.437: 98.8487% ( 8) 00:08:12.216 29642.437 - 29844.086: 98.9544% ( 9) 00:08:12.216 29844.086 - 30045.735: 99.0602% ( 9) 00:08:12.216 30045.735 - 30247.385: 99.1424% ( 7) 00:08:12.216 30247.385 - 30449.034: 99.2481% ( 9) 00:08:12.216 37103.458 - 37305.108: 99.2834% ( 3) 00:08:12.216 37305.108 - 37506.757: 99.3891% ( 9) 00:08:12.216 37506.757 - 37708.406: 99.4831% ( 8) 00:08:12.216 37708.406 - 37910.055: 99.5888% ( 9) 00:08:12.216 37910.055 - 38111.705: 99.6945% ( 9) 00:08:12.216 38111.705 - 38313.354: 99.7885% ( 8) 00:08:12.216 38313.354 - 38515.003: 99.8825% ( 8) 00:08:12.216 38515.003 - 38716.652: 99.9883% ( 9) 00:08:12.216 38716.652 - 38918.302: 100.0000% ( 1) 00:08:12.216 00:08:12.216 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:12.216 ============================================================================== 00:08:12.216 Range in us Cumulative IO count 00:08:12.216 9931.225 - 9981.637: 0.0235% ( 2) 00:08:12.216 9981.637 - 10032.049: 0.0470% ( 2) 00:08:12.216 10032.049 - 10082.462: 0.0822% ( 3) 00:08:12.216 10082.462 - 10132.874: 0.1057% ( 2) 00:08:12.216 10132.874 - 10183.286: 0.1292% ( 2) 00:08:12.216 10183.286 - 10233.698: 0.1527% ( 2) 00:08:12.216 10284.111 - 10334.523: 0.2232% ( 6) 00:08:12.216 10334.523 - 10384.935: 0.2350% ( 1) 00:08:12.216 10384.935 - 10435.348: 0.2585% ( 2) 00:08:12.216 10435.348 - 10485.760: 0.2702% ( 1) 00:08:12.216 10485.760 - 10536.172: 0.3055% ( 3) 00:08:12.216 10536.172 - 10586.585: 0.3407% ( 3) 00:08:12.216 10586.585 - 10636.997: 0.3524% ( 1) 00:08:12.216 10636.997 - 10687.409: 0.3877% ( 3) 00:08:12.216 10687.409 - 10737.822: 0.4112% ( 2) 00:08:12.216 10737.822 - 10788.234: 0.4464% ( 3) 00:08:12.216 10788.234 - 10838.646: 0.4582% ( 1) 00:08:12.216 10838.646 - 10889.058: 0.5052% ( 4) 00:08:12.216 10889.058 - 10939.471: 0.5287% ( 2) 00:08:12.216 10939.471 - 10989.883: 0.5639% ( 3) 00:08:12.216 10989.883 - 11040.295: 0.5874% ( 2) 00:08:12.216 11040.295 - 11090.708: 0.6579% ( 6) 00:08:12.216 11090.708 - 11141.120: 0.6931% ( 3) 00:08:12.216 11141.120 - 11191.532: 0.7519% ( 5) 00:08:12.216 11191.532 - 11241.945: 0.8459% ( 8) 00:08:12.216 11241.945 - 11292.357: 0.9281% ( 7) 00:08:12.216 11292.357 - 11342.769: 1.0573% ( 11) 00:08:12.216 11342.769 - 11393.182: 1.0691% ( 1) 00:08:12.216 11393.182 - 11443.594: 1.1513% ( 7) 00:08:12.216 11443.594 - 11494.006: 1.2336% ( 7) 00:08:12.216 11494.006 - 11544.418: 1.3275% ( 8) 00:08:12.216 11544.418 - 11594.831: 1.5038% ( 15) 00:08:12.216 11594.831 - 11645.243: 1.6565% ( 13) 00:08:12.216 11645.243 - 11695.655: 1.8680% ( 18) 00:08:12.216 11695.655 - 11746.068: 2.0677% ( 17) 00:08:12.216 11746.068 - 11796.480: 2.2321% ( 14) 00:08:12.216 11796.480 - 11846.892: 2.5258% ( 25) 
00:08:12.216 11846.892 - 11897.305: 2.8783% ( 30) 00:08:12.216 11897.305 - 11947.717: 3.1720% ( 25) 00:08:12.216 11947.717 - 11998.129: 3.5832% ( 35) 00:08:12.216 11998.129 - 12048.542: 4.0179% ( 37) 00:08:12.216 12048.542 - 12098.954: 4.4878% ( 40) 00:08:12.216 12098.954 - 12149.366: 4.9930% ( 43) 00:08:12.216 12149.366 - 12199.778: 5.6156% ( 53) 00:08:12.216 12199.778 - 12250.191: 6.2970% ( 58) 00:08:12.216 12250.191 - 12300.603: 6.9666% ( 57) 00:08:12.216 12300.603 - 12351.015: 7.6950% ( 62) 00:08:12.216 12351.015 - 12401.428: 8.4939% ( 68) 00:08:12.216 12401.428 - 12451.840: 9.2223% ( 62) 00:08:12.216 12451.840 - 12502.252: 9.9742% ( 64) 00:08:12.216 12502.252 - 12552.665: 10.7260% ( 64) 00:08:12.216 12552.665 - 12603.077: 11.3722% ( 55) 00:08:12.216 12603.077 - 12653.489: 12.1476% ( 66) 00:08:12.216 12653.489 - 12703.902: 13.0639% ( 78) 00:08:12.216 12703.902 - 12754.314: 13.9685% ( 77) 00:08:12.216 12754.314 - 12804.726: 14.8261% ( 73) 00:08:12.216 12804.726 - 12855.138: 15.6837% ( 73) 00:08:12.216 12855.138 - 12905.551: 16.7646% ( 92) 00:08:12.216 12905.551 - 13006.375: 18.8087% ( 174) 00:08:12.216 13006.375 - 13107.200: 20.5945% ( 152) 00:08:12.216 13107.200 - 13208.025: 22.3097% ( 146) 00:08:12.216 13208.025 - 13308.849: 24.2834% ( 168) 00:08:12.216 13308.849 - 13409.674: 26.3863% ( 179) 00:08:12.216 13409.674 - 13510.498: 28.3600% ( 168) 00:08:12.216 13510.498 - 13611.323: 30.1927% ( 156) 00:08:12.216 13611.323 - 13712.148: 32.1076% ( 163) 00:08:12.216 13712.148 - 13812.972: 34.2105% ( 179) 00:08:12.216 13812.972 - 13913.797: 36.1137% ( 162) 00:08:12.216 13913.797 - 14014.622: 38.3459% ( 190) 00:08:12.216 14014.622 - 14115.446: 40.1551% ( 154) 00:08:12.216 14115.446 - 14216.271: 41.9995% ( 157) 00:08:12.216 14216.271 - 14317.095: 44.0085% ( 171) 00:08:12.216 14317.095 - 14417.920: 45.7237% ( 146) 00:08:12.216 14417.920 - 14518.745: 47.6034% ( 160) 00:08:12.216 14518.745 - 14619.569: 49.3656% ( 150) 00:08:12.216 14619.569 - 14720.394: 51.5155% ( 183) 00:08:12.216 14720.394 - 14821.218: 53.2777% ( 150) 00:08:12.216 14821.218 - 14922.043: 54.9930% ( 146) 00:08:12.216 14922.043 - 15022.868: 56.9196% ( 164) 00:08:12.216 15022.868 - 15123.692: 58.8228% ( 162) 00:08:12.216 15123.692 - 15224.517: 60.8905% ( 176) 00:08:12.216 15224.517 - 15325.342: 62.5352% ( 140) 00:08:12.216 15325.342 - 15426.166: 64.2505% ( 146) 00:08:12.216 15426.166 - 15526.991: 65.9422% ( 144) 00:08:12.216 15526.991 - 15627.815: 67.3872% ( 123) 00:08:12.216 15627.815 - 15728.640: 69.0555% ( 142) 00:08:12.216 15728.640 - 15829.465: 70.4300% ( 117) 00:08:12.216 15829.465 - 15930.289: 71.4403% ( 86) 00:08:12.217 15930.289 - 16031.114: 72.6856% ( 106) 00:08:12.217 16031.114 - 16131.938: 73.7664% ( 92) 00:08:12.217 16131.938 - 16232.763: 74.9530% ( 101) 00:08:12.217 16232.763 - 16333.588: 76.0926% ( 97) 00:08:12.217 16333.588 - 16434.412: 77.4201% ( 113) 00:08:12.217 16434.412 - 16535.237: 78.4539% ( 88) 00:08:12.217 16535.237 - 16636.062: 79.6170% ( 99) 00:08:12.217 16636.062 - 16736.886: 80.8623% ( 106) 00:08:12.217 16736.886 - 16837.711: 81.8844% ( 87) 00:08:12.217 16837.711 - 16938.535: 83.1062% ( 104) 00:08:12.217 16938.535 - 17039.360: 84.5277% ( 121) 00:08:12.217 17039.360 - 17140.185: 85.7260% ( 102) 00:08:12.217 17140.185 - 17241.009: 86.6659% ( 80) 00:08:12.217 17241.009 - 17341.834: 88.0874% ( 121) 00:08:12.217 17341.834 - 17442.658: 89.0508% ( 82) 00:08:12.217 17442.658 - 17543.483: 90.5310% ( 126) 00:08:12.217 17543.483 - 17644.308: 91.5179% ( 84) 00:08:12.217 17644.308 - 17745.132: 92.4107% ( 76) 
00:08:12.217 17745.132 - 17845.957: 93.3741% ( 82) 00:08:12.217 17845.957 - 17946.782: 93.9615% ( 50) 00:08:12.217 17946.782 - 18047.606: 94.5254% ( 48) 00:08:12.217 18047.606 - 18148.431: 94.9836% ( 39) 00:08:12.217 18148.431 - 18249.255: 95.3947% ( 35) 00:08:12.217 18249.255 - 18350.080: 95.7119% ( 27) 00:08:12.217 18350.080 - 18450.905: 95.9821% ( 23) 00:08:12.217 18450.905 - 18551.729: 96.3346% ( 30) 00:08:12.217 18551.729 - 18652.554: 96.5343% ( 17) 00:08:12.217 18652.554 - 18753.378: 96.7223% ( 16) 00:08:12.217 18753.378 - 18854.203: 96.9925% ( 23) 00:08:12.217 18854.203 - 18955.028: 97.2274% ( 20) 00:08:12.217 18955.028 - 19055.852: 97.4037% ( 15) 00:08:12.217 19055.852 - 19156.677: 97.6151% ( 18) 00:08:12.217 19156.677 - 19257.502: 97.7444% ( 11) 00:08:12.217 19257.502 - 19358.326: 97.8383% ( 8) 00:08:12.217 19358.326 - 19459.151: 98.0263% ( 16) 00:08:12.217 19459.151 - 19559.975: 98.1086% ( 7) 00:08:12.217 19559.975 - 19660.800: 98.2378% ( 11) 00:08:12.217 19660.800 - 19761.625: 98.3083% ( 6) 00:08:12.217 19761.625 - 19862.449: 98.3553% ( 4) 00:08:12.217 19862.449 - 19963.274: 98.4023% ( 4) 00:08:12.217 19963.274 - 20064.098: 98.4140% ( 1) 00:08:12.217 20064.098 - 20164.923: 98.4845% ( 6) 00:08:12.217 20164.923 - 20265.748: 98.4962% ( 1) 00:08:12.217 27625.945 - 27827.594: 98.5550% ( 5) 00:08:12.217 27827.594 - 28029.243: 98.6255% ( 6) 00:08:12.217 28029.243 - 28230.892: 98.7195% ( 8) 00:08:12.217 28230.892 - 28432.542: 98.8017% ( 7) 00:08:12.217 28432.542 - 28634.191: 98.8957% ( 8) 00:08:12.217 28634.191 - 28835.840: 98.9779% ( 7) 00:08:12.217 28835.840 - 29037.489: 99.0719% ( 8) 00:08:12.217 29037.489 - 29239.138: 99.1776% ( 9) 00:08:12.217 29239.138 - 29440.788: 99.2481% ( 6) 00:08:12.217 35691.914 - 35893.563: 99.3186% ( 6) 00:08:12.217 35893.563 - 36095.212: 99.4008% ( 7) 00:08:12.217 36095.212 - 36296.862: 99.4831% ( 7) 00:08:12.217 36296.862 - 36498.511: 99.5888% ( 9) 00:08:12.217 36498.511 - 36700.160: 99.6711% ( 7) 00:08:12.217 36700.160 - 36901.809: 99.7650% ( 8) 00:08:12.217 36901.809 - 37103.458: 99.8590% ( 8) 00:08:12.217 37103.458 - 37305.108: 99.9530% ( 8) 00:08:12.217 37305.108 - 37506.757: 100.0000% ( 4) 00:08:12.217 00:08:12.217 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:12.217 ============================================================================== 00:08:12.217 Range in us Cumulative IO count 00:08:12.217 10132.874 - 10183.286: 0.0117% ( 1) 00:08:12.217 10183.286 - 10233.698: 0.0587% ( 4) 00:08:12.217 10233.698 - 10284.111: 0.0822% ( 2) 00:08:12.217 10284.111 - 10334.523: 0.1175% ( 3) 00:08:12.217 10334.523 - 10384.935: 0.1527% ( 3) 00:08:12.217 10384.935 - 10435.348: 0.1762% ( 2) 00:08:12.217 10435.348 - 10485.760: 0.1997% ( 2) 00:08:12.217 10485.760 - 10536.172: 0.2232% ( 2) 00:08:12.217 10536.172 - 10586.585: 0.2585% ( 3) 00:08:12.217 10586.585 - 10636.997: 0.2937% ( 3) 00:08:12.217 10636.997 - 10687.409: 0.3289% ( 3) 00:08:12.217 10687.409 - 10737.822: 0.3759% ( 4) 00:08:12.217 10737.822 - 10788.234: 0.4347% ( 5) 00:08:12.217 10788.234 - 10838.646: 0.4817% ( 4) 00:08:12.217 10838.646 - 10889.058: 0.5287% ( 4) 00:08:12.217 10889.058 - 10939.471: 0.5757% ( 4) 00:08:12.217 10939.471 - 10989.883: 0.6344% ( 5) 00:08:12.217 10989.883 - 11040.295: 0.7049% ( 6) 00:08:12.217 11040.295 - 11090.708: 0.7519% ( 4) 00:08:12.217 11090.708 - 11141.120: 0.7871% ( 3) 00:08:12.217 11141.120 - 11191.532: 0.8459% ( 5) 00:08:12.217 11191.532 - 11241.945: 0.9046% ( 5) 00:08:12.217 11241.945 - 11292.357: 0.9868% ( 7) 00:08:12.217 11292.357 - 
11342.769: 1.0808% ( 8) 00:08:12.217 11342.769 - 11393.182: 1.1631% ( 7) 00:08:12.217 11393.182 - 11443.594: 1.2336% ( 6) 00:08:12.217 11443.594 - 11494.006: 1.3040% ( 6) 00:08:12.217 11494.006 - 11544.418: 1.4098% ( 9) 00:08:12.217 11544.418 - 11594.831: 1.5860% ( 15) 00:08:12.217 11594.831 - 11645.243: 1.7740% ( 16) 00:08:12.217 11645.243 - 11695.655: 2.0324% ( 22) 00:08:12.217 11695.655 - 11746.068: 2.3261% ( 25) 00:08:12.217 11746.068 - 11796.480: 2.6316% ( 26) 00:08:12.217 11796.480 - 11846.892: 3.0193% ( 33) 00:08:12.217 11846.892 - 11897.305: 3.4305% ( 35) 00:08:12.217 11897.305 - 11947.717: 3.8651% ( 37) 00:08:12.217 11947.717 - 11998.129: 4.2998% ( 37) 00:08:12.217 11998.129 - 12048.542: 4.7932% ( 42) 00:08:12.217 12048.542 - 12098.954: 5.4041% ( 52) 00:08:12.217 12098.954 - 12149.366: 6.0385% ( 54) 00:08:12.217 12149.366 - 12199.778: 6.6612% ( 53) 00:08:12.217 12199.778 - 12250.191: 7.2251% ( 48) 00:08:12.217 12250.191 - 12300.603: 7.7655% ( 46) 00:08:12.217 12300.603 - 12351.015: 8.3529% ( 50) 00:08:12.217 12351.015 - 12401.428: 9.1165% ( 65) 00:08:12.217 12401.428 - 12451.840: 9.7979% ( 58) 00:08:12.217 12451.840 - 12502.252: 10.4676% ( 57) 00:08:12.217 12502.252 - 12552.665: 11.1020% ( 54) 00:08:12.217 12552.665 - 12603.077: 11.8891% ( 67) 00:08:12.217 12603.077 - 12653.489: 12.6645% ( 66) 00:08:12.217 12653.489 - 12703.902: 13.2989% ( 54) 00:08:12.217 12703.902 - 12754.314: 13.9685% ( 57) 00:08:12.217 12754.314 - 12804.726: 14.7674% ( 68) 00:08:12.217 12804.726 - 12855.138: 15.3548% ( 50) 00:08:12.217 12855.138 - 12905.551: 15.9774% ( 53) 00:08:12.217 12905.551 - 13006.375: 17.1875% ( 103) 00:08:12.217 13006.375 - 13107.200: 18.7970% ( 137) 00:08:12.217 13107.200 - 13208.025: 20.7589% ( 167) 00:08:12.217 13208.025 - 13308.849: 22.8971% ( 182) 00:08:12.217 13308.849 - 13409.674: 24.9295% ( 173) 00:08:12.217 13409.674 - 13510.498: 26.8210% ( 161) 00:08:12.217 13510.498 - 13611.323: 28.9944% ( 185) 00:08:12.217 13611.323 - 13712.148: 31.2383% ( 191) 00:08:12.217 13712.148 - 13812.972: 33.6701% ( 207) 00:08:12.217 13812.972 - 13913.797: 36.1725% ( 213) 00:08:12.217 13913.797 - 14014.622: 38.4868% ( 197) 00:08:12.217 14014.622 - 14115.446: 40.3783% ( 161) 00:08:12.217 14115.446 - 14216.271: 42.1992% ( 155) 00:08:12.217 14216.271 - 14317.095: 43.9497% ( 149) 00:08:12.217 14317.095 - 14417.920: 45.9117% ( 167) 00:08:12.217 14417.920 - 14518.745: 48.0381% ( 181) 00:08:12.217 14518.745 - 14619.569: 50.0117% ( 168) 00:08:12.217 14619.569 - 14720.394: 51.9619% ( 166) 00:08:12.217 14720.394 - 14821.218: 53.9239% ( 167) 00:08:12.217 14821.218 - 14922.043: 55.7331% ( 154) 00:08:12.217 14922.043 - 15022.868: 57.7068% ( 168) 00:08:12.217 15022.868 - 15123.692: 59.5630% ( 158) 00:08:12.217 15123.692 - 15224.517: 61.5249% ( 167) 00:08:12.217 15224.517 - 15325.342: 63.2636% ( 148) 00:08:12.217 15325.342 - 15426.166: 65.0141% ( 149) 00:08:12.217 15426.166 - 15526.991: 66.8351% ( 155) 00:08:12.217 15526.991 - 15627.815: 68.1861% ( 115) 00:08:12.217 15627.815 - 15728.640: 69.4666% ( 109) 00:08:12.217 15728.640 - 15829.465: 70.6884% ( 104) 00:08:12.217 15829.465 - 15930.289: 71.9572% ( 108) 00:08:12.217 15930.289 - 16031.114: 73.2378% ( 109) 00:08:12.217 16031.114 - 16131.938: 74.3656% ( 96) 00:08:12.217 16131.938 - 16232.763: 75.4347% ( 91) 00:08:12.217 16232.763 - 16333.588: 76.6095% ( 100) 00:08:12.217 16333.588 - 16434.412: 77.7138% ( 94) 00:08:12.217 16434.412 - 16535.237: 78.8299% ( 95) 00:08:12.217 16535.237 - 16636.062: 80.1339% ( 111) 00:08:12.217 16636.062 - 16736.886: 81.3910% ( 107) 
00:08:12.217 16736.886 - 16837.711: 82.7068% ( 112) 00:08:12.217 16837.711 - 16938.535: 83.8581% ( 98) 00:08:12.217 16938.535 - 17039.360: 84.9507% ( 93) 00:08:12.217 17039.360 - 17140.185: 86.1372% ( 101) 00:08:12.217 17140.185 - 17241.009: 87.2415% ( 94) 00:08:12.217 17241.009 - 17341.834: 88.3224% ( 92) 00:08:12.217 17341.834 - 17442.658: 89.3210% ( 85) 00:08:12.217 17442.658 - 17543.483: 90.2373% ( 78) 00:08:12.217 17543.483 - 17644.308: 91.0127% ( 66) 00:08:12.217 17644.308 - 17745.132: 91.6706% ( 56) 00:08:12.217 17745.132 - 17845.957: 92.5047% ( 71) 00:08:12.217 17845.957 - 17946.782: 93.2683% ( 65) 00:08:12.218 17946.782 - 18047.606: 93.8087% ( 46) 00:08:12.218 18047.606 - 18148.431: 94.3844% ( 49) 00:08:12.218 18148.431 - 18249.255: 94.8661% ( 41) 00:08:12.218 18249.255 - 18350.080: 95.2890% ( 36) 00:08:12.218 18350.080 - 18450.905: 95.6414% ( 30) 00:08:12.218 18450.905 - 18551.729: 96.0174% ( 32) 00:08:12.218 18551.729 - 18652.554: 96.3463% ( 28) 00:08:12.218 18652.554 - 18753.378: 96.6635% ( 27) 00:08:12.218 18753.378 - 18854.203: 96.9220% ( 22) 00:08:12.218 18854.203 - 18955.028: 97.1570% ( 20) 00:08:12.218 18955.028 - 19055.852: 97.2979% ( 12) 00:08:12.218 19055.852 - 19156.677: 97.4272% ( 11) 00:08:12.218 19156.677 - 19257.502: 97.5799% ( 13) 00:08:12.218 19257.502 - 19358.326: 97.7209% ( 12) 00:08:12.218 19358.326 - 19459.151: 97.8501% ( 11) 00:08:12.218 19459.151 - 19559.975: 97.9676% ( 10) 00:08:12.218 19559.975 - 19660.800: 98.0733% ( 9) 00:08:12.218 19660.800 - 19761.625: 98.1790% ( 9) 00:08:12.218 19761.625 - 19862.449: 98.2848% ( 9) 00:08:12.218 19862.449 - 19963.274: 98.3553% ( 6) 00:08:12.218 19963.274 - 20064.098: 98.4140% ( 5) 00:08:12.218 20064.098 - 20164.923: 98.4492% ( 3) 00:08:12.218 20164.923 - 20265.748: 98.4845% ( 3) 00:08:12.218 20265.748 - 20366.572: 98.4962% ( 1) 00:08:12.218 26819.348 - 27020.997: 98.5667% ( 6) 00:08:12.218 27020.997 - 27222.646: 98.6607% ( 8) 00:08:12.218 27222.646 - 27424.295: 98.7547% ( 8) 00:08:12.218 27424.295 - 27625.945: 98.8487% ( 8) 00:08:12.218 27625.945 - 27827.594: 98.9427% ( 8) 00:08:12.218 27827.594 - 28029.243: 99.0367% ( 8) 00:08:12.218 28029.243 - 28230.892: 99.1424% ( 9) 00:08:12.218 28230.892 - 28432.542: 99.2364% ( 8) 00:08:12.218 28432.542 - 28634.191: 99.2481% ( 1) 00:08:12.218 34280.369 - 34482.018: 99.3304% ( 7) 00:08:12.218 34482.018 - 34683.668: 99.4243% ( 8) 00:08:12.218 34683.668 - 34885.317: 99.5066% ( 7) 00:08:12.218 34885.317 - 35086.966: 99.6123% ( 9) 00:08:12.218 35086.966 - 35288.615: 99.7063% ( 8) 00:08:12.218 35288.615 - 35490.265: 99.8120% ( 9) 00:08:12.218 35490.265 - 35691.914: 99.8943% ( 7) 00:08:12.218 35691.914 - 35893.563: 100.0000% ( 9) 00:08:12.218 00:08:12.218 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:12.218 ============================================================================== 00:08:12.218 Range in us Cumulative IO count 00:08:12.218 9326.277 - 9376.689: 0.0117% ( 1) 00:08:12.218 9376.689 - 9427.102: 0.0352% ( 2) 00:08:12.218 9427.102 - 9477.514: 0.0587% ( 2) 00:08:12.218 9477.514 - 9527.926: 0.0940% ( 3) 00:08:12.218 9527.926 - 9578.338: 0.1175% ( 2) 00:08:12.218 9578.338 - 9628.751: 0.1527% ( 3) 00:08:12.218 9628.751 - 9679.163: 0.1762% ( 2) 00:08:12.218 9679.163 - 9729.575: 0.2115% ( 3) 00:08:12.218 9729.575 - 9779.988: 0.2467% ( 3) 00:08:12.218 9779.988 - 9830.400: 0.2820% ( 3) 00:08:12.218 9830.400 - 9880.812: 0.3055% ( 2) 00:08:12.218 9880.812 - 9931.225: 0.3407% ( 3) 00:08:12.218 9931.225 - 9981.637: 0.3642% ( 2) 00:08:12.218 9981.637 - 10032.049: 
0.3994% ( 3) 00:08:12.218 10032.049 - 10082.462: 0.4229% ( 2) 00:08:12.218 10082.462 - 10132.874: 0.4582% ( 3) 00:08:12.218 10132.874 - 10183.286: 0.4934% ( 3) 00:08:12.218 10183.286 - 10233.698: 0.5169% ( 2) 00:08:12.218 10233.698 - 10284.111: 0.5522% ( 3) 00:08:12.218 10284.111 - 10334.523: 0.5874% ( 3) 00:08:12.218 10334.523 - 10384.935: 0.6227% ( 3) 00:08:12.218 10384.935 - 10435.348: 0.6461% ( 2) 00:08:12.218 10435.348 - 10485.760: 0.6814% ( 3) 00:08:12.218 10485.760 - 10536.172: 0.7166% ( 3) 00:08:12.218 10536.172 - 10586.585: 0.7401% ( 2) 00:08:12.218 10586.585 - 10636.997: 0.7519% ( 1) 00:08:12.218 10889.058 - 10939.471: 0.7989% ( 4) 00:08:12.218 10939.471 - 10989.883: 0.8576% ( 5) 00:08:12.218 10989.883 - 11040.295: 0.9281% ( 6) 00:08:12.218 11040.295 - 11090.708: 0.9751% ( 4) 00:08:12.218 11090.708 - 11141.120: 1.0456% ( 6) 00:08:12.218 11141.120 - 11191.532: 1.0926% ( 4) 00:08:12.218 11191.532 - 11241.945: 1.2336% ( 12) 00:08:12.218 11241.945 - 11292.357: 1.3628% ( 11) 00:08:12.218 11292.357 - 11342.769: 1.5390% ( 15) 00:08:12.218 11342.769 - 11393.182: 1.7152% ( 15) 00:08:12.218 11393.182 - 11443.594: 1.9267% ( 18) 00:08:12.218 11443.594 - 11494.006: 2.1499% ( 19) 00:08:12.218 11494.006 - 11544.418: 2.3144% ( 14) 00:08:12.218 11544.418 - 11594.831: 2.5141% ( 17) 00:08:12.218 11594.831 - 11645.243: 2.6786% ( 14) 00:08:12.218 11645.243 - 11695.655: 2.9488% ( 23) 00:08:12.218 11695.655 - 11746.068: 3.1955% ( 21) 00:08:12.218 11746.068 - 11796.480: 3.4774% ( 24) 00:08:12.218 11796.480 - 11846.892: 3.6889% ( 18) 00:08:12.218 11846.892 - 11897.305: 3.9709% ( 24) 00:08:12.218 11897.305 - 11947.717: 4.2293% ( 22) 00:08:12.218 11947.717 - 11998.129: 4.4760% ( 21) 00:08:12.218 11998.129 - 12048.542: 4.8050% ( 28) 00:08:12.218 12048.542 - 12098.954: 5.1927% ( 33) 00:08:12.218 12098.954 - 12149.366: 5.6508% ( 39) 00:08:12.218 12149.366 - 12199.778: 6.1678% ( 44) 00:08:12.218 12199.778 - 12250.191: 6.6142% ( 38) 00:08:12.218 12250.191 - 12300.603: 7.0959% ( 41) 00:08:12.218 12300.603 - 12351.015: 7.6010% ( 43) 00:08:12.218 12351.015 - 12401.428: 8.0945% ( 42) 00:08:12.218 12401.428 - 12451.840: 8.6701% ( 49) 00:08:12.218 12451.840 - 12502.252: 9.3045% ( 54) 00:08:12.218 12502.252 - 12552.665: 10.0094% ( 60) 00:08:12.218 12552.665 - 12603.077: 10.7260% ( 61) 00:08:12.218 12603.077 - 12653.489: 11.5014% ( 66) 00:08:12.218 12653.489 - 12703.902: 12.2768% ( 66) 00:08:12.218 12703.902 - 12754.314: 13.1109% ( 71) 00:08:12.218 12754.314 - 12804.726: 13.9568% ( 72) 00:08:12.218 12804.726 - 12855.138: 14.8026% ( 72) 00:08:12.218 12855.138 - 12905.551: 15.7660% ( 82) 00:08:12.218 12905.551 - 13006.375: 17.8689% ( 179) 00:08:12.218 13006.375 - 13107.200: 19.8073% ( 165) 00:08:12.218 13107.200 - 13208.025: 21.6283% ( 155) 00:08:12.218 13208.025 - 13308.849: 23.4258% ( 153) 00:08:12.218 13308.849 - 13409.674: 25.3994% ( 168) 00:08:12.218 13409.674 - 13510.498: 27.5963% ( 187) 00:08:12.218 13510.498 - 13611.323: 29.4995% ( 162) 00:08:12.218 13611.323 - 13712.148: 31.3440% ( 157) 00:08:12.218 13712.148 - 13812.972: 33.4469% ( 179) 00:08:12.218 13812.972 - 13913.797: 35.5028% ( 175) 00:08:12.218 13913.797 - 14014.622: 37.3238% ( 155) 00:08:12.218 14014.622 - 14115.446: 39.2387% ( 163) 00:08:12.218 14115.446 - 14216.271: 41.1184% ( 160) 00:08:12.218 14216.271 - 14317.095: 42.8219% ( 145) 00:08:12.218 14317.095 - 14417.920: 44.7838% ( 167) 00:08:12.218 14417.920 - 14518.745: 46.6165% ( 156) 00:08:12.218 14518.745 - 14619.569: 48.5785% ( 167) 00:08:12.218 14619.569 - 14720.394: 50.3524% ( 151) 00:08:12.218 
14720.394 - 14821.218: 52.3261% ( 168) 00:08:12.218 14821.218 - 14922.043: 54.4525% ( 181) 00:08:12.218 14922.043 - 15022.868: 56.3322% ( 160) 00:08:12.218 15022.868 - 15123.692: 58.0357% ( 145) 00:08:12.218 15123.692 - 15224.517: 59.6922% ( 141) 00:08:12.218 15224.517 - 15325.342: 61.2077% ( 129) 00:08:12.218 15325.342 - 15426.166: 62.8407% ( 139) 00:08:12.218 15426.166 - 15526.991: 64.5089% ( 142) 00:08:12.218 15526.991 - 15627.815: 66.4944% ( 169) 00:08:12.218 15627.815 - 15728.640: 68.5620% ( 176) 00:08:12.218 15728.640 - 15829.465: 70.5122% ( 166) 00:08:12.218 15829.465 - 15930.289: 72.2862% ( 151) 00:08:12.218 15930.289 - 16031.114: 73.8839% ( 136) 00:08:12.218 16031.114 - 16131.938: 75.4817% ( 136) 00:08:12.218 16131.938 - 16232.763: 77.1382% ( 141) 00:08:12.218 16232.763 - 16333.588: 78.7124% ( 134) 00:08:12.218 16333.588 - 16434.412: 80.0869% ( 117) 00:08:12.218 16434.412 - 16535.237: 81.2735% ( 101) 00:08:12.218 16535.237 - 16636.062: 82.4366% ( 99) 00:08:12.218 16636.062 - 16736.886: 83.3764% ( 80) 00:08:12.218 16736.886 - 16837.711: 84.2928% ( 78) 00:08:12.218 16837.711 - 16938.535: 85.1269% ( 71) 00:08:12.218 16938.535 - 17039.360: 85.9845% ( 73) 00:08:12.218 17039.360 - 17140.185: 86.8891% ( 77) 00:08:12.218 17140.185 - 17241.009: 87.7585% ( 74) 00:08:12.218 17241.009 - 17341.834: 88.5808% ( 70) 00:08:12.218 17341.834 - 17442.658: 89.5912% ( 86) 00:08:12.218 17442.658 - 17543.483: 90.4253% ( 71) 00:08:12.218 17543.483 - 17644.308: 91.1537% ( 62) 00:08:12.218 17644.308 - 17745.132: 91.9643% ( 69) 00:08:12.218 17745.132 - 17845.957: 92.8689% ( 77) 00:08:12.218 17845.957 - 17946.782: 93.7383% ( 74) 00:08:12.218 17946.782 - 18047.606: 94.5959% ( 73) 00:08:12.218 18047.606 - 18148.431: 95.2420% ( 55) 00:08:12.218 18148.431 - 18249.255: 95.7472% ( 43) 00:08:12.218 18249.255 - 18350.080: 96.2054% ( 39) 00:08:12.218 18350.080 - 18450.905: 96.5695% ( 31) 00:08:12.218 18450.905 - 18551.729: 96.9455% ( 32) 00:08:12.218 18551.729 - 18652.554: 97.1687% ( 19) 00:08:12.218 18652.554 - 18753.378: 97.4037% ( 20) 00:08:12.218 18753.378 - 18854.203: 97.6386% ( 20) 00:08:12.218 18854.203 - 18955.028: 97.7796% ( 12) 00:08:12.218 18955.028 - 19055.852: 97.9441% ( 14) 00:08:12.218 19055.852 - 19156.677: 98.0616% ( 10) 00:08:12.218 19156.677 - 19257.502: 98.1673% ( 9) 00:08:12.218 19257.502 - 19358.326: 98.2378% ( 6) 00:08:12.218 19358.326 - 19459.151: 98.2848% ( 4) 00:08:12.218 19459.151 - 19559.975: 98.3083% ( 2) 00:08:12.218 19559.975 - 19660.800: 98.3435% ( 3) 00:08:12.218 19660.800 - 19761.625: 98.3788% ( 3) 00:08:12.218 19761.625 - 19862.449: 98.4258% ( 4) 00:08:12.219 19862.449 - 19963.274: 98.4610% ( 3) 00:08:12.219 19963.274 - 20064.098: 98.4845% ( 2) 00:08:12.219 20064.098 - 20164.923: 98.4962% ( 1) 00:08:12.219 26214.400 - 26416.049: 98.5550% ( 5) 00:08:12.219 26416.049 - 26617.698: 98.6490% ( 8) 00:08:12.219 26617.698 - 26819.348: 98.7430% ( 8) 00:08:12.219 26819.348 - 27020.997: 98.8487% ( 9) 00:08:12.219 27020.997 - 27222.646: 98.9309% ( 7) 00:08:12.219 27222.646 - 27424.295: 99.0249% ( 8) 00:08:12.219 27424.295 - 27625.945: 99.1189% ( 8) 00:08:12.219 27625.945 - 27827.594: 99.2246% ( 9) 00:08:12.219 27827.594 - 28029.243: 99.2481% ( 2) 00:08:12.219 33473.772 - 33675.422: 99.2599% ( 1) 00:08:12.219 33675.422 - 33877.071: 99.3069% ( 4) 00:08:12.219 33877.071 - 34078.720: 99.3656% ( 5) 00:08:12.219 34078.720 - 34280.369: 99.4361% ( 6) 00:08:12.219 34280.369 - 34482.018: 99.4948% ( 5) 00:08:12.219 34482.018 - 34683.668: 99.5771% ( 7) 00:08:12.219 34683.668 - 34885.317: 99.6711% ( 8) 
00:08:12.219 34885.317 - 35086.966: 99.7650% ( 8) 00:08:12.219 35086.966 - 35288.615: 99.8708% ( 9) 00:08:12.219 35288.615 - 35490.265: 99.9648% ( 8) 00:08:12.219 35490.265 - 35691.914: 100.0000% ( 3) 00:08:12.219 00:08:12.219 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:12.219 ============================================================================== 00:08:12.219 Range in us Cumulative IO count 00:08:12.219 9225.452 - 9275.865: 0.0235% ( 2) 00:08:12.219 9275.865 - 9326.277: 0.0470% ( 2) 00:08:12.219 9326.277 - 9376.689: 0.0822% ( 3) 00:08:12.219 9376.689 - 9427.102: 0.1057% ( 2) 00:08:12.219 9427.102 - 9477.514: 0.1410% ( 3) 00:08:12.219 9477.514 - 9527.926: 0.1762% ( 3) 00:08:12.219 9527.926 - 9578.338: 0.1997% ( 2) 00:08:12.219 9578.338 - 9628.751: 0.2467% ( 4) 00:08:12.219 9628.751 - 9679.163: 0.2820% ( 3) 00:08:12.219 9679.163 - 9729.575: 0.3055% ( 2) 00:08:12.219 9729.575 - 9779.988: 0.3407% ( 3) 00:08:12.219 9779.988 - 9830.400: 0.3642% ( 2) 00:08:12.219 9830.400 - 9880.812: 0.3877% ( 2) 00:08:12.219 9880.812 - 9931.225: 0.4229% ( 3) 00:08:12.219 9931.225 - 9981.637: 0.4582% ( 3) 00:08:12.219 9981.637 - 10032.049: 0.4817% ( 2) 00:08:12.219 10032.049 - 10082.462: 0.5169% ( 3) 00:08:12.219 10082.462 - 10132.874: 0.5522% ( 3) 00:08:12.219 10132.874 - 10183.286: 0.5757% ( 2) 00:08:12.219 10183.286 - 10233.698: 0.6109% ( 3) 00:08:12.219 10233.698 - 10284.111: 0.6461% ( 3) 00:08:12.219 10284.111 - 10334.523: 0.6696% ( 2) 00:08:12.219 10334.523 - 10384.935: 0.7049% ( 3) 00:08:12.219 10384.935 - 10435.348: 0.7401% ( 3) 00:08:12.219 10435.348 - 10485.760: 0.7519% ( 1) 00:08:12.219 11191.532 - 11241.945: 0.7636% ( 1) 00:08:12.219 11241.945 - 11292.357: 0.7989% ( 3) 00:08:12.219 11292.357 - 11342.769: 0.8459% ( 4) 00:08:12.219 11342.769 - 11393.182: 0.9398% ( 8) 00:08:12.219 11393.182 - 11443.594: 1.1866% ( 21) 00:08:12.219 11443.594 - 11494.006: 1.3628% ( 15) 00:08:12.219 11494.006 - 11544.418: 1.5390% ( 15) 00:08:12.219 11544.418 - 11594.831: 1.8092% ( 23) 00:08:12.219 11594.831 - 11645.243: 2.2439% ( 37) 00:08:12.219 11645.243 - 11695.655: 2.5728% ( 28) 00:08:12.219 11695.655 - 11746.068: 2.9135% ( 29) 00:08:12.219 11746.068 - 11796.480: 3.3247% ( 35) 00:08:12.219 11796.480 - 11846.892: 3.7007% ( 32) 00:08:12.219 11846.892 - 11897.305: 4.1236% ( 36) 00:08:12.219 11897.305 - 11947.717: 4.5818% ( 39) 00:08:12.219 11947.717 - 11998.129: 5.0634% ( 41) 00:08:12.219 11998.129 - 12048.542: 5.7448% ( 58) 00:08:12.219 12048.542 - 12098.954: 6.3087% ( 48) 00:08:12.219 12098.954 - 12149.366: 6.8844% ( 49) 00:08:12.219 12149.366 - 12199.778: 7.4483% ( 48) 00:08:12.219 12199.778 - 12250.191: 8.0357% ( 50) 00:08:12.219 12250.191 - 12300.603: 8.6466% ( 52) 00:08:12.219 12300.603 - 12351.015: 9.3045% ( 56) 00:08:12.219 12351.015 - 12401.428: 9.9859% ( 58) 00:08:12.219 12401.428 - 12451.840: 10.7378% ( 64) 00:08:12.219 12451.840 - 12502.252: 11.5132% ( 66) 00:08:12.219 12502.252 - 12552.665: 12.2533% ( 63) 00:08:12.219 12552.665 - 12603.077: 12.8994% ( 55) 00:08:12.219 12603.077 - 12653.489: 13.5573% ( 56) 00:08:12.219 12653.489 - 12703.902: 14.3445% ( 67) 00:08:12.219 12703.902 - 12754.314: 15.1551% ( 69) 00:08:12.219 12754.314 - 12804.726: 15.7660% ( 52) 00:08:12.219 12804.726 - 12855.138: 16.4709% ( 60) 00:08:12.219 12855.138 - 12905.551: 17.1523% ( 58) 00:08:12.219 12905.551 - 13006.375: 18.5738% ( 121) 00:08:12.219 13006.375 - 13107.200: 20.1715% ( 136) 00:08:12.219 13107.200 - 13208.025: 21.9572% ( 152) 00:08:12.219 13208.025 - 13308.849: 23.6842% ( 147) 00:08:12.219 
13308.849 - 13409.674: 25.6109% ( 164) 00:08:12.219 13409.674 - 13510.498: 27.5493% ( 165) 00:08:12.219 13510.498 - 13611.323: 29.3938% ( 157) 00:08:12.219 13611.323 - 13712.148: 31.3322% ( 165) 00:08:12.219 13712.148 - 13812.972: 33.3647% ( 173) 00:08:12.219 13812.972 - 13913.797: 35.5616% ( 187) 00:08:12.219 13913.797 - 14014.622: 37.7820% ( 189) 00:08:12.219 14014.622 - 14115.446: 39.7086% ( 164) 00:08:12.219 14115.446 - 14216.271: 41.5179% ( 154) 00:08:12.219 14216.271 - 14317.095: 43.1861% ( 142) 00:08:12.219 14317.095 - 14417.920: 44.8073% ( 138) 00:08:12.219 14417.920 - 14518.745: 46.4051% ( 136) 00:08:12.219 14518.745 - 14619.569: 48.2730% ( 159) 00:08:12.219 14619.569 - 14720.394: 50.2702% ( 170) 00:08:12.219 14720.394 - 14821.218: 52.0794% ( 154) 00:08:12.219 14821.218 - 14922.043: 53.8299% ( 149) 00:08:12.219 14922.043 - 15022.868: 55.4041% ( 134) 00:08:12.219 15022.868 - 15123.692: 57.1311% ( 147) 00:08:12.219 15123.692 - 15224.517: 58.7876% ( 141) 00:08:12.219 15224.517 - 15325.342: 60.5733% ( 152) 00:08:12.219 15325.342 - 15426.166: 62.4530% ( 160) 00:08:12.219 15426.166 - 15526.991: 64.4854% ( 173) 00:08:12.219 15526.991 - 15627.815: 66.3886% ( 162) 00:08:12.219 15627.815 - 15728.640: 68.1508% ( 150) 00:08:12.219 15728.640 - 15829.465: 69.8308% ( 143) 00:08:12.219 15829.465 - 15930.289: 71.5108% ( 143) 00:08:12.219 15930.289 - 16031.114: 73.2260% ( 146) 00:08:12.219 16031.114 - 16131.938: 74.7650% ( 131) 00:08:12.219 16131.938 - 16232.763: 76.5155% ( 149) 00:08:12.219 16232.763 - 16333.588: 78.1015% ( 135) 00:08:12.219 16333.588 - 16434.412: 79.4408% ( 114) 00:08:12.219 16434.412 - 16535.237: 80.7448% ( 111) 00:08:12.219 16535.237 - 16636.062: 82.1076% ( 116) 00:08:12.219 16636.062 - 16736.886: 83.2237% ( 95) 00:08:12.219 16736.886 - 16837.711: 84.2458% ( 87) 00:08:12.219 16837.711 - 16938.535: 85.2091% ( 82) 00:08:12.219 16938.535 - 17039.360: 86.1490% ( 80) 00:08:12.219 17039.360 - 17140.185: 87.0301% ( 75) 00:08:12.219 17140.185 - 17241.009: 87.7937% ( 65) 00:08:12.219 17241.009 - 17341.834: 88.5691% ( 66) 00:08:12.219 17341.834 - 17442.658: 89.4267% ( 73) 00:08:12.219 17442.658 - 17543.483: 90.2256% ( 68) 00:08:12.219 17543.483 - 17644.308: 91.0597% ( 71) 00:08:12.219 17644.308 - 17745.132: 91.7998% ( 63) 00:08:12.219 17745.132 - 17845.957: 92.5517% ( 64) 00:08:12.219 17845.957 - 17946.782: 93.2566% ( 60) 00:08:12.219 17946.782 - 18047.606: 93.9380% ( 58) 00:08:12.219 18047.606 - 18148.431: 94.5606% ( 53) 00:08:12.219 18148.431 - 18249.255: 95.0893% ( 45) 00:08:12.219 18249.255 - 18350.080: 95.5945% ( 43) 00:08:12.219 18350.080 - 18450.905: 96.0409% ( 38) 00:08:12.219 18450.905 - 18551.729: 96.4403% ( 34) 00:08:12.219 18551.729 - 18652.554: 96.8163% ( 32) 00:08:12.219 18652.554 - 18753.378: 97.1687% ( 30) 00:08:12.219 18753.378 - 18854.203: 97.4742% ( 26) 00:08:12.219 18854.203 - 18955.028: 97.7209% ( 21) 00:08:12.219 18955.028 - 19055.852: 97.9088% ( 16) 00:08:12.219 19055.852 - 19156.677: 98.0616% ( 13) 00:08:12.219 19156.677 - 19257.502: 98.1320% ( 6) 00:08:12.219 19257.502 - 19358.326: 98.1673% ( 3) 00:08:12.219 19358.326 - 19459.151: 98.2025% ( 3) 00:08:12.219 19459.151 - 19559.975: 98.2378% ( 3) 00:08:12.219 19559.975 - 19660.800: 98.2848% ( 4) 00:08:12.219 19660.800 - 19761.625: 98.3083% ( 2) 00:08:12.219 19761.625 - 19862.449: 98.3435% ( 3) 00:08:12.220 19862.449 - 19963.274: 98.3788% ( 3) 00:08:12.220 19963.274 - 20064.098: 98.4140% ( 3) 00:08:12.220 20064.098 - 20164.923: 98.4610% ( 4) 00:08:12.220 20164.923 - 20265.748: 98.4845% ( 2) 00:08:12.220 20265.748 - 
20366.572: 98.4962% ( 1) 00:08:12.220 24903.680 - 25004.505: 98.5197% ( 2) 00:08:12.220 25004.505 - 25105.329: 98.5667% ( 4) 00:08:12.220 25105.329 - 25206.154: 98.6137% ( 4) 00:08:12.220 25206.154 - 25306.978: 98.6607% ( 4) 00:08:12.220 25306.978 - 25407.803: 98.7077% ( 4) 00:08:12.220 25407.803 - 25508.628: 98.7547% ( 4) 00:08:12.220 25508.628 - 25609.452: 98.8017% ( 4) 00:08:12.220 25609.452 - 25710.277: 98.8487% ( 4) 00:08:12.220 25710.277 - 25811.102: 98.8957% ( 4) 00:08:12.220 25811.102 - 26012.751: 98.9897% ( 8) 00:08:12.220 26012.751 - 26214.400: 99.0954% ( 9) 00:08:12.220 26214.400 - 26416.049: 99.1894% ( 8) 00:08:12.220 26416.049 - 26617.698: 99.2481% ( 5) 00:08:12.220 32062.228 - 32263.877: 99.3069% ( 5) 00:08:12.220 32263.877 - 32465.526: 99.4126% ( 9) 00:08:12.220 32465.526 - 32667.175: 99.4713% ( 5) 00:08:12.220 32667.175 - 32868.825: 99.5771% ( 9) 00:08:12.220 32868.825 - 33070.474: 99.6711% ( 8) 00:08:12.220 33070.474 - 33272.123: 99.7650% ( 8) 00:08:12.220 33272.123 - 33473.772: 99.8590% ( 8) 00:08:12.220 33473.772 - 33675.422: 99.9530% ( 8) 00:08:12.220 33675.422 - 33877.071: 100.0000% ( 4) 00:08:12.220 00:08:12.220 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:12.220 ============================================================================== 00:08:12.220 Range in us Cumulative IO count 00:08:12.220 8973.391 - 9023.803: 0.0233% ( 2) 00:08:12.220 9023.803 - 9074.215: 0.0466% ( 2) 00:08:12.220 9074.215 - 9124.628: 0.0816% ( 3) 00:08:12.220 9124.628 - 9175.040: 0.1049% ( 2) 00:08:12.220 9175.040 - 9225.452: 0.1749% ( 6) 00:08:12.220 9225.452 - 9275.865: 0.2449% ( 6) 00:08:12.220 9275.865 - 9326.277: 0.2799% ( 3) 00:08:12.220 9326.277 - 9376.689: 0.3032% ( 2) 00:08:12.220 9376.689 - 9427.102: 0.3265% ( 2) 00:08:12.220 9477.514 - 9527.926: 0.3498% ( 2) 00:08:12.220 9527.926 - 9578.338: 0.3848% ( 3) 00:08:12.220 9578.338 - 9628.751: 0.4081% ( 2) 00:08:12.220 9628.751 - 9679.163: 0.4314% ( 2) 00:08:12.220 9679.163 - 9729.575: 0.4664% ( 3) 00:08:12.220 9729.575 - 9779.988: 0.5014% ( 3) 00:08:12.220 9779.988 - 9830.400: 0.5247% ( 2) 00:08:12.220 9830.400 - 9880.812: 0.5597% ( 3) 00:08:12.220 9880.812 - 9931.225: 0.5947% ( 3) 00:08:12.220 9931.225 - 9981.637: 0.6180% ( 2) 00:08:12.220 9981.637 - 10032.049: 0.6530% ( 3) 00:08:12.220 10032.049 - 10082.462: 0.6880% ( 3) 00:08:12.220 10082.462 - 10132.874: 0.7113% ( 2) 00:08:12.220 10132.874 - 10183.286: 0.7346% ( 2) 00:08:12.220 10183.286 - 10233.698: 0.7463% ( 1) 00:08:12.220 11141.120 - 11191.532: 0.7696% ( 2) 00:08:12.220 11191.532 - 11241.945: 0.8046% ( 3) 00:08:12.220 11241.945 - 11292.357: 0.8396% ( 3) 00:08:12.220 11292.357 - 11342.769: 0.9445% ( 9) 00:08:12.220 11342.769 - 11393.182: 1.0728% ( 11) 00:08:12.220 11393.182 - 11443.594: 1.2127% ( 12) 00:08:12.220 11443.594 - 11494.006: 1.3643% ( 13) 00:08:12.220 11494.006 - 11544.418: 1.5858% ( 19) 00:08:12.220 11544.418 - 11594.831: 1.7374% ( 13) 00:08:12.220 11594.831 - 11645.243: 1.9356% ( 17) 00:08:12.220 11645.243 - 11695.655: 2.1572% ( 19) 00:08:12.220 11695.655 - 11746.068: 2.4021% ( 21) 00:08:12.220 11746.068 - 11796.480: 2.6469% ( 21) 00:08:12.220 11796.480 - 11846.892: 2.9501% ( 26) 00:08:12.220 11846.892 - 11897.305: 3.3699% ( 36) 00:08:12.220 11897.305 - 11947.717: 3.8946% ( 45) 00:08:12.220 11947.717 - 11998.129: 4.5009% ( 52) 00:08:12.220 11998.129 - 12048.542: 5.0140% ( 44) 00:08:12.220 12048.542 - 12098.954: 5.5387% ( 45) 00:08:12.220 12098.954 - 12149.366: 6.0634% ( 45) 00:08:12.220 12149.366 - 12199.778: 6.7514% ( 59) 00:08:12.220 
12199.778 - 12250.191: 7.4044% ( 56) 00:08:12.220 12250.191 - 12300.603: 8.0924% ( 59) 00:08:12.220 12300.603 - 12351.015: 8.7220% ( 54) 00:08:12.220 12351.015 - 12401.428: 9.4100% ( 59) 00:08:12.220 12401.428 - 12451.840: 10.2729% ( 74) 00:08:12.220 12451.840 - 12502.252: 11.2174% ( 81) 00:08:12.220 12502.252 - 12552.665: 12.1035% ( 76) 00:08:12.220 12552.665 - 12603.077: 12.9897% ( 76) 00:08:12.220 12603.077 - 12653.489: 13.9576% ( 83) 00:08:12.220 12653.489 - 12703.902: 14.8554% ( 77) 00:08:12.220 12703.902 - 12754.314: 15.6950% ( 72) 00:08:12.220 12754.314 - 12804.726: 16.5695% ( 75) 00:08:12.220 12804.726 - 12855.138: 17.4674% ( 77) 00:08:12.220 12855.138 - 12905.551: 18.4002% ( 80) 00:08:12.220 12905.551 - 13006.375: 20.2892% ( 162) 00:08:12.220 13006.375 - 13107.200: 22.1665% ( 161) 00:08:12.220 13107.200 - 13208.025: 23.9972% ( 157) 00:08:12.220 13208.025 - 13308.849: 25.8512% ( 159) 00:08:12.220 13308.849 - 13409.674: 27.6936% ( 158) 00:08:12.220 13409.674 - 13510.498: 29.5009% ( 155) 00:08:12.220 13510.498 - 13611.323: 31.1567% ( 142) 00:08:12.220 13611.323 - 13712.148: 32.5793% ( 122) 00:08:12.221 13712.148 - 13812.972: 34.3867% ( 155) 00:08:12.221 13812.972 - 13913.797: 36.2523% ( 160) 00:08:12.221 13913.797 - 14014.622: 38.1530% ( 163) 00:08:12.221 14014.622 - 14115.446: 39.9487% ( 154) 00:08:12.221 14115.446 - 14216.271: 41.7910% ( 158) 00:08:12.221 14216.271 - 14317.095: 43.7383% ( 167) 00:08:12.221 14317.095 - 14417.920: 45.3125% ( 135) 00:08:12.221 14417.920 - 14518.745: 46.9100% ( 137) 00:08:12.221 14518.745 - 14619.569: 48.5541% ( 141) 00:08:12.221 14619.569 - 14720.394: 50.4198% ( 160) 00:08:12.221 14720.394 - 14821.218: 52.3554% ( 166) 00:08:12.221 14821.218 - 14922.043: 54.3960% ( 175) 00:08:12.221 14922.043 - 15022.868: 56.3783% ( 170) 00:08:12.221 15022.868 - 15123.692: 58.2323% ( 159) 00:08:12.221 15123.692 - 15224.517: 59.9347% ( 146) 00:08:12.221 15224.517 - 15325.342: 61.6371% ( 146) 00:08:12.221 15325.342 - 15426.166: 63.5028% ( 160) 00:08:12.221 15426.166 - 15526.991: 65.1469% ( 141) 00:08:12.221 15526.991 - 15627.815: 66.9193% ( 152) 00:08:12.221 15627.815 - 15728.640: 68.7850% ( 160) 00:08:12.221 15728.640 - 15829.465: 70.5340% ( 150) 00:08:12.221 15829.465 - 15930.289: 71.9566% ( 122) 00:08:12.221 15930.289 - 16031.114: 73.3675% ( 121) 00:08:12.221 16031.114 - 16131.938: 74.6618% ( 111) 00:08:12.221 16131.938 - 16232.763: 76.1311% ( 126) 00:08:12.221 16232.763 - 16333.588: 77.5070% ( 118) 00:08:12.221 16333.588 - 16434.412: 78.7430% ( 106) 00:08:12.221 16434.412 - 16535.237: 80.0606% ( 113) 00:08:12.221 16535.237 - 16636.062: 81.3316% ( 109) 00:08:12.221 16636.062 - 16736.886: 82.4160% ( 93) 00:08:12.221 16736.886 - 16837.711: 83.6521% ( 106) 00:08:12.221 16837.711 - 16938.535: 84.9230% ( 109) 00:08:12.221 16938.535 - 17039.360: 86.1007% ( 101) 00:08:12.221 17039.360 - 17140.185: 87.2318% ( 97) 00:08:12.221 17140.185 - 17241.009: 88.3279% ( 94) 00:08:12.221 17241.009 - 17341.834: 89.3190% ( 85) 00:08:12.221 17341.834 - 17442.658: 90.2519% ( 80) 00:08:12.221 17442.658 - 17543.483: 91.0798% ( 71) 00:08:12.221 17543.483 - 17644.308: 91.8144% ( 63) 00:08:12.221 17644.308 - 17745.132: 92.3974% ( 50) 00:08:12.221 17745.132 - 17845.957: 93.0037% ( 52) 00:08:12.221 17845.957 - 17946.782: 93.5868% ( 50) 00:08:12.221 17946.782 - 18047.606: 94.1698% ( 50) 00:08:12.221 18047.606 - 18148.431: 94.6479% ( 41) 00:08:12.221 18148.431 - 18249.255: 95.1726% ( 45) 00:08:12.221 18249.255 - 18350.080: 95.6973% ( 45) 00:08:12.221 18350.080 - 18450.905: 96.1287% ( 37) 00:08:12.221 
18450.905 - 18551.729: 96.5835% ( 39) 00:08:12.221 18551.729 - 18652.554: 97.0149% ( 37) 00:08:12.221 18652.554 - 18753.378: 97.3531% ( 29) 00:08:12.221 18753.378 - 18854.203: 97.5513% ( 17) 00:08:12.221 18854.203 - 18955.028: 97.7962% ( 21) 00:08:12.221 18955.028 - 19055.852: 98.2393% ( 38) 00:08:12.221 19055.852 - 19156.677: 98.4608% ( 19) 00:08:12.221 19156.677 - 19257.502: 98.6474% ( 16) 00:08:12.221 19257.502 - 19358.326: 98.7174% ( 6) 00:08:12.221 19358.326 - 19459.151: 98.8223% ( 9) 00:08:12.221 19459.151 - 19559.975: 98.9156% ( 8) 00:08:12.221 19559.975 - 19660.800: 99.0205% ( 9) 00:08:12.221 19660.800 - 19761.625: 99.1138% ( 8) 00:08:12.221 19761.625 - 19862.449: 99.1721% ( 5) 00:08:12.221 19862.449 - 19963.274: 99.2071% ( 3) 00:08:12.221 19963.274 - 20064.098: 99.2421% ( 3) 00:08:12.221 20064.098 - 20164.923: 99.2537% ( 1) 00:08:12.221 24903.680 - 25004.505: 99.2887% ( 3) 00:08:12.221 25004.505 - 25105.329: 99.3354% ( 4) 00:08:12.221 25105.329 - 25206.154: 99.3820% ( 4) 00:08:12.221 25206.154 - 25306.978: 99.4403% ( 5) 00:08:12.221 25306.978 - 25407.803: 99.4869% ( 4) 00:08:12.221 25407.803 - 25508.628: 99.5336% ( 4) 00:08:12.221 25508.628 - 25609.452: 99.5802% ( 4) 00:08:12.221 25609.452 - 25710.277: 99.6269% ( 4) 00:08:12.221 25710.277 - 25811.102: 99.6735% ( 4) 00:08:12.221 25811.102 - 26012.751: 99.7785% ( 9) 00:08:12.221 26012.751 - 26214.400: 99.8717% ( 8) 00:08:12.221 26214.400 - 26416.049: 99.9767% ( 9) 00:08:12.221 26416.049 - 26617.698: 100.0000% ( 2) 00:08:12.221 00:08:12.221 11:24:11 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:08:13.606 Initializing NVMe Controllers 00:08:13.606 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:13.606 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:13.606 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:13.606 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:13.606 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:13.606 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:13.606 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:13.606 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:13.606 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:13.606 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:13.606 Initialization complete. Launching workers. 
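The cumulative-latency histograms in this output and the "Summary latency data" percentile tables that follow describe the same distribution at two granularities: each histogram line gives a latency range in microseconds plus the cumulative share of I/Os completed at or below that bucket, and each reported percentile (1.00000%, 50.00000%, 99.00000%, ...) is the upper edge of the first bucket whose cumulative share reaches that level. Below is a minimal Python sketch of that reduction, assuming only the "low - high: cumulative% ( count )" bucket layout printed here; the helper names are illustrative, not part of SPDK.

    import re

    # Bucket lines in the histograms above look like
    #   "12199.778 - 12250.191: 51.2298% ( 144)"
    # i.e. a latency range in microseconds, the cumulative share of I/Os
    # completed at or below the bucket's upper edge, and the bucket count.
    BUCKET_RE = re.compile(r"([\d.]+)\s*-\s*([\d.]+):\s*([\d.]+)%\s*\(\s*(\d+)\s*\)")

    def percentile_latencies(lines, levels=(1.0, 10.0, 25.0, 50.0, 75.0, 90.0, 95.0, 98.0, 99.0)):
        """Map each percentile level to the upper edge (in us) of the first
        bucket whose cumulative percentage reaches that level."""
        buckets = []
        for line in lines:
            match = BUCKET_RE.search(line)
            if match:
                _lo, hi, cum_pct, _count = match.groups()
                buckets.append((float(hi), float(cum_pct)))
        result = {}
        for level in levels:
            for upper_edge_us, cum_pct in buckets:  # buckets are printed in ascending order
                if cum_pct >= level:
                    result[level] = upper_edge_us
                    break
        return result

    if __name__ == "__main__":
        sample = [
            "12149.366 - 12199.778: 49.7782% ( 215)",
            "12199.778 - 12250.191: 51.2298% ( 144)",
        ]
        # Reproduces the 50th-percentile value reported below in the summary
        # table for PCIE (0000:00:13.0) NSID 1: 12250.191us.
        print(percentile_latencies(sample, levels=(50.0,)))

For example, the 0000:00:13.0 NSID 1 summary below reports "50.00000% : 12250.191us", which is the upper edge of the first histogram bucket for that namespace to cross 50% (12199.778 - 12250.191: 51.2298%).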
00:08:13.606 ======================================================== 00:08:13.606 Latency(us) 00:08:13.606 Device Information : IOPS MiB/s Average min max 00:08:13.606 PCIE (0000:00:13.0) NSID 1 from core 0: 9918.28 116.23 12926.07 8556.00 36625.43 00:08:13.606 PCIE (0000:00:10.0) NSID 1 from core 0: 9918.28 116.23 12906.36 8989.03 35148.98 00:08:13.606 PCIE (0000:00:11.0) NSID 1 from core 0: 9918.28 116.23 12885.19 9129.62 33581.83 00:08:13.606 PCIE (0000:00:12.0) NSID 1 from core 0: 9918.28 116.23 12865.06 8701.24 32650.26 00:08:13.606 PCIE (0000:00:12.0) NSID 2 from core 0: 9918.28 116.23 12844.51 9003.42 31837.51 00:08:13.606 PCIE (0000:00:12.0) NSID 3 from core 0: 9982.27 116.98 12741.98 8645.62 24192.88 00:08:13.606 ======================================================== 00:08:13.606 Total : 59573.69 698.13 12861.40 8556.00 36625.43 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9376.689us 00:08:13.606 10.00000% : 10636.997us 00:08:13.606 25.00000% : 11393.182us 00:08:13.606 50.00000% : 12250.191us 00:08:13.606 75.00000% : 13913.797us 00:08:13.606 90.00000% : 15930.289us 00:08:13.606 95.00000% : 16736.886us 00:08:13.606 98.00000% : 17946.782us 00:08:13.606 99.00000% : 28835.840us 00:08:13.606 99.50000% : 35490.265us 00:08:13.606 99.90000% : 36498.511us 00:08:13.606 99.99000% : 36700.160us 00:08:13.606 99.99900% : 36700.160us 00:08:13.606 99.99990% : 36700.160us 00:08:13.606 99.99999% : 36700.160us 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9477.514us 00:08:13.606 10.00000% : 10636.997us 00:08:13.606 25.00000% : 11292.357us 00:08:13.606 50.00000% : 12300.603us 00:08:13.606 75.00000% : 13812.972us 00:08:13.606 90.00000% : 15930.289us 00:08:13.606 95.00000% : 16736.886us 00:08:13.606 98.00000% : 17845.957us 00:08:13.606 99.00000% : 27827.594us 00:08:13.606 99.50000% : 33877.071us 00:08:13.606 99.90000% : 35086.966us 00:08:13.606 99.99000% : 35288.615us 00:08:13.606 99.99900% : 35288.615us 00:08:13.606 99.99990% : 35288.615us 00:08:13.606 99.99999% : 35288.615us 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9527.926us 00:08:13.606 10.00000% : 10737.822us 00:08:13.606 25.00000% : 11342.769us 00:08:13.606 50.00000% : 12250.191us 00:08:13.606 75.00000% : 13913.797us 00:08:13.606 90.00000% : 15829.465us 00:08:13.606 95.00000% : 17039.360us 00:08:13.606 98.00000% : 17946.782us 00:08:13.606 99.00000% : 26012.751us 00:08:13.606 99.50000% : 32465.526us 00:08:13.606 99.90000% : 33473.772us 00:08:13.606 99.99000% : 33675.422us 00:08:13.606 99.99900% : 33675.422us 00:08:13.606 99.99990% : 33675.422us 00:08:13.606 99.99999% : 33675.422us 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9326.277us 00:08:13.606 10.00000% : 10636.997us 00:08:13.606 25.00000% : 11292.357us 00:08:13.606 50.00000% : 12250.191us 00:08:13.606 75.00000% : 14014.622us 00:08:13.606 90.00000% : 15829.465us 00:08:13.606 95.00000% : 16938.535us 00:08:13.606 98.00000% : 18148.431us 
00:08:13.606 99.00000% : 25508.628us 00:08:13.606 99.50000% : 31658.929us 00:08:13.606 99.90000% : 32465.526us 00:08:13.606 99.99000% : 32667.175us 00:08:13.606 99.99900% : 32667.175us 00:08:13.606 99.99990% : 32667.175us 00:08:13.606 99.99999% : 32667.175us 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9376.689us 00:08:13.606 10.00000% : 10687.409us 00:08:13.606 25.00000% : 11241.945us 00:08:13.606 50.00000% : 12149.366us 00:08:13.606 75.00000% : 14115.446us 00:08:13.606 90.00000% : 15728.640us 00:08:13.606 95.00000% : 17039.360us 00:08:13.606 98.00000% : 17946.782us 00:08:13.606 99.00000% : 23592.960us 00:08:13.606 99.50000% : 30650.683us 00:08:13.606 99.90000% : 31658.929us 00:08:13.606 99.99000% : 31860.578us 00:08:13.606 99.99900% : 31860.578us 00:08:13.606 99.99990% : 31860.578us 00:08:13.606 99.99999% : 31860.578us 00:08:13.606 00:08:13.606 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:13.606 ================================================================================= 00:08:13.606 1.00000% : 9326.277us 00:08:13.606 10.00000% : 10636.997us 00:08:13.606 25.00000% : 11342.769us 00:08:13.606 50.00000% : 12149.366us 00:08:13.606 75.00000% : 14115.446us 00:08:13.606 90.00000% : 15728.640us 00:08:13.606 95.00000% : 16736.886us 00:08:13.606 98.00000% : 17442.658us 00:08:13.606 99.00000% : 18047.606us 00:08:13.606 99.50000% : 22988.012us 00:08:13.606 99.90000% : 23996.258us 00:08:13.606 99.99000% : 24197.908us 00:08:13.606 99.99900% : 24197.908us 00:08:13.606 99.99990% : 24197.908us 00:08:13.606 99.99999% : 24197.908us 00:08:13.606 00:08:13.606 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:13.606 ============================================================================== 00:08:13.606 Range in us Cumulative IO count 00:08:13.606 8519.680 - 8570.092: 0.0101% ( 1) 00:08:13.606 8721.329 - 8771.742: 0.0302% ( 2) 00:08:13.606 8771.742 - 8822.154: 0.0806% ( 5) 00:08:13.607 8822.154 - 8872.566: 0.1310% ( 5) 00:08:13.607 8872.566 - 8922.978: 0.2117% ( 8) 00:08:13.607 8922.978 - 8973.391: 0.2722% ( 6) 00:08:13.607 8973.391 - 9023.803: 0.3125% ( 4) 00:08:13.607 9023.803 - 9074.215: 0.3730% ( 6) 00:08:13.607 9074.215 - 9124.628: 0.4435% ( 7) 00:08:13.607 9124.628 - 9175.040: 0.5544% ( 11) 00:08:13.607 9175.040 - 9225.452: 0.6452% ( 9) 00:08:13.607 9225.452 - 9275.865: 0.7560% ( 11) 00:08:13.607 9275.865 - 9326.277: 0.8569% ( 10) 00:08:13.607 9326.277 - 9376.689: 1.0081% ( 15) 00:08:13.607 9376.689 - 9427.102: 1.1190% ( 11) 00:08:13.607 9427.102 - 9477.514: 1.2097% ( 9) 00:08:13.607 9477.514 - 9527.926: 1.2802% ( 7) 00:08:13.607 9527.926 - 9578.338: 1.3609% ( 8) 00:08:13.607 9578.338 - 9628.751: 1.4415% ( 8) 00:08:13.607 9628.751 - 9679.163: 1.5423% ( 10) 00:08:13.607 9679.163 - 9729.575: 1.6935% ( 15) 00:08:13.607 9729.575 - 9779.988: 1.7742% ( 8) 00:08:13.607 9779.988 - 9830.400: 1.8851% ( 11) 00:08:13.607 9830.400 - 9880.812: 2.0665% ( 18) 00:08:13.607 9880.812 - 9931.225: 2.3387% ( 27) 00:08:13.607 9931.225 - 9981.637: 2.6512% ( 31) 00:08:13.607 9981.637 - 10032.049: 2.9335% ( 28) 00:08:13.607 10032.049 - 10082.462: 3.4375% ( 50) 00:08:13.607 10082.462 - 10132.874: 3.8911% ( 45) 00:08:13.607 10132.874 - 10183.286: 4.1935% ( 30) 00:08:13.607 10183.286 - 10233.698: 4.4859% ( 29) 00:08:13.607 10233.698 - 10284.111: 4.9496% ( 46) 00:08:13.607 10284.111 - 10334.523: 5.3730% ( 42) 
00:08:13.607 10334.523 - 10384.935: 5.9375% ( 56) 00:08:13.607 10384.935 - 10435.348: 6.7036% ( 76) 00:08:13.607 10435.348 - 10485.760: 7.3891% ( 68) 00:08:13.607 10485.760 - 10536.172: 8.3871% ( 99) 00:08:13.607 10536.172 - 10586.585: 9.3347% ( 94) 00:08:13.607 10586.585 - 10636.997: 10.6452% ( 130) 00:08:13.607 10636.997 - 10687.409: 11.7238% ( 107) 00:08:13.607 10687.409 - 10737.822: 12.8327% ( 110) 00:08:13.607 10737.822 - 10788.234: 13.8407% ( 100) 00:08:13.607 10788.234 - 10838.646: 14.6774% ( 83) 00:08:13.607 10838.646 - 10889.058: 15.4536% ( 77) 00:08:13.607 10889.058 - 10939.471: 16.3105% ( 85) 00:08:13.607 10939.471 - 10989.883: 17.1774% ( 86) 00:08:13.607 10989.883 - 11040.295: 18.0444% ( 86) 00:08:13.607 11040.295 - 11090.708: 18.8911% ( 84) 00:08:13.607 11090.708 - 11141.120: 19.9496% ( 105) 00:08:13.607 11141.120 - 11191.532: 20.8468% ( 89) 00:08:13.607 11191.532 - 11241.945: 21.8649% ( 101) 00:08:13.607 11241.945 - 11292.357: 22.9133% ( 104) 00:08:13.607 11292.357 - 11342.769: 24.0020% ( 108) 00:08:13.607 11342.769 - 11393.182: 25.0202% ( 101) 00:08:13.607 11393.182 - 11443.594: 26.1694% ( 114) 00:08:13.607 11443.594 - 11494.006: 27.5706% ( 139) 00:08:13.607 11494.006 - 11544.418: 28.8105% ( 123) 00:08:13.607 11544.418 - 11594.831: 29.8790% ( 106) 00:08:13.607 11594.831 - 11645.243: 30.9879% ( 110) 00:08:13.607 11645.243 - 11695.655: 32.3387% ( 134) 00:08:13.607 11695.655 - 11746.068: 33.9718% ( 162) 00:08:13.607 11746.068 - 11796.480: 35.1210% ( 114) 00:08:13.607 11796.480 - 11846.892: 36.2903% ( 116) 00:08:13.607 11846.892 - 11897.305: 37.7722% ( 147) 00:08:13.607 11897.305 - 11947.717: 39.8185% ( 203) 00:08:13.607 11947.717 - 11998.129: 41.8347% ( 200) 00:08:13.607 11998.129 - 12048.542: 43.9315% ( 208) 00:08:13.607 12048.542 - 12098.954: 45.7258% ( 178) 00:08:13.607 12098.954 - 12149.366: 47.6109% ( 187) 00:08:13.607 12149.366 - 12199.778: 49.7782% ( 215) 00:08:13.607 12199.778 - 12250.191: 51.2298% ( 144) 00:08:13.607 12250.191 - 12300.603: 52.6210% ( 138) 00:08:13.607 12300.603 - 12351.015: 53.6391% ( 101) 00:08:13.607 12351.015 - 12401.428: 54.7077% ( 106) 00:08:13.607 12401.428 - 12451.840: 55.8266% ( 111) 00:08:13.607 12451.840 - 12502.252: 56.7742% ( 94) 00:08:13.607 12502.252 - 12552.665: 57.8528% ( 107) 00:08:13.607 12552.665 - 12603.077: 58.7198% ( 86) 00:08:13.607 12603.077 - 12653.489: 59.7177% ( 99) 00:08:13.607 12653.489 - 12703.902: 60.7359% ( 101) 00:08:13.607 12703.902 - 12754.314: 61.7843% ( 104) 00:08:13.607 12754.314 - 12804.726: 62.8730% ( 108) 00:08:13.607 12804.726 - 12855.138: 63.9718% ( 109) 00:08:13.607 12855.138 - 12905.551: 64.9798% ( 100) 00:08:13.607 12905.551 - 13006.375: 67.0665% ( 207) 00:08:13.607 13006.375 - 13107.200: 68.3266% ( 125) 00:08:13.607 13107.200 - 13208.025: 69.4556% ( 112) 00:08:13.607 13208.025 - 13308.849: 70.4637% ( 100) 00:08:13.607 13308.849 - 13409.674: 71.4415% ( 97) 00:08:13.607 13409.674 - 13510.498: 72.2581% ( 81) 00:08:13.607 13510.498 - 13611.323: 73.1956% ( 93) 00:08:13.607 13611.323 - 13712.148: 74.2137% ( 101) 00:08:13.607 13712.148 - 13812.972: 74.9597% ( 74) 00:08:13.607 13812.972 - 13913.797: 75.7359% ( 77) 00:08:13.607 13913.797 - 14014.622: 76.4214% ( 68) 00:08:13.607 14014.622 - 14115.446: 77.4294% ( 100) 00:08:13.607 14115.446 - 14216.271: 78.5181% ( 108) 00:08:13.607 14216.271 - 14317.095: 79.4254% ( 90) 00:08:13.607 14317.095 - 14417.920: 80.3427% ( 91) 00:08:13.607 14417.920 - 14518.745: 81.0081% ( 66) 00:08:13.607 14518.745 - 14619.569: 81.7137% ( 70) 00:08:13.607 14619.569 - 14720.394: 82.3790% ( 
66) 00:08:13.607 14720.394 - 14821.218: 83.0242% ( 64) 00:08:13.607 14821.218 - 14922.043: 83.8105% ( 78) 00:08:13.607 14922.043 - 15022.868: 84.4758% ( 66) 00:08:13.607 15022.868 - 15123.692: 85.1008% ( 62) 00:08:13.607 15123.692 - 15224.517: 85.9577% ( 85) 00:08:13.607 15224.517 - 15325.342: 86.7339% ( 77) 00:08:13.607 15325.342 - 15426.166: 87.3992% ( 66) 00:08:13.607 15426.166 - 15526.991: 88.0847% ( 68) 00:08:13.607 15526.991 - 15627.815: 88.6794% ( 59) 00:08:13.607 15627.815 - 15728.640: 89.1734% ( 49) 00:08:13.607 15728.640 - 15829.465: 89.8690% ( 69) 00:08:13.607 15829.465 - 15930.289: 90.2621% ( 39) 00:08:13.607 15930.289 - 16031.114: 90.7863% ( 52) 00:08:13.607 16031.114 - 16131.938: 91.3004% ( 51) 00:08:13.607 16131.938 - 16232.763: 91.8246% ( 52) 00:08:13.607 16232.763 - 16333.588: 92.5302% ( 70) 00:08:13.607 16333.588 - 16434.412: 93.1653% ( 63) 00:08:13.607 16434.412 - 16535.237: 93.9919% ( 82) 00:08:13.607 16535.237 - 16636.062: 94.8185% ( 82) 00:08:13.607 16636.062 - 16736.886: 95.4536% ( 63) 00:08:13.607 16736.886 - 16837.711: 96.0181% ( 56) 00:08:13.607 16837.711 - 16938.535: 96.4113% ( 39) 00:08:13.607 16938.535 - 17039.360: 96.6734% ( 26) 00:08:13.607 17039.360 - 17140.185: 96.9355% ( 26) 00:08:13.607 17140.185 - 17241.009: 97.1774% ( 24) 00:08:13.607 17241.009 - 17341.834: 97.3992% ( 22) 00:08:13.607 17341.834 - 17442.658: 97.6008% ( 20) 00:08:13.607 17442.658 - 17543.483: 97.7823% ( 18) 00:08:13.607 17543.483 - 17644.308: 97.9032% ( 12) 00:08:13.607 17644.308 - 17745.132: 97.9536% ( 5) 00:08:13.607 17745.132 - 17845.957: 97.9940% ( 4) 00:08:13.607 17845.957 - 17946.782: 98.0746% ( 8) 00:08:13.607 17946.782 - 18047.606: 98.1653% ( 9) 00:08:13.607 18047.606 - 18148.431: 98.2460% ( 8) 00:08:13.607 18148.431 - 18249.255: 98.3367% ( 9) 00:08:13.607 18249.255 - 18350.080: 98.3972% ( 6) 00:08:13.607 18350.080 - 18450.905: 98.4476% ( 5) 00:08:13.607 18450.905 - 18551.729: 98.4879% ( 4) 00:08:13.607 18551.729 - 18652.554: 98.5383% ( 5) 00:08:13.607 18652.554 - 18753.378: 98.5988% ( 6) 00:08:13.607 18753.378 - 18854.203: 98.6593% ( 6) 00:08:13.607 18854.203 - 18955.028: 98.7097% ( 5) 00:08:13.607 28029.243 - 28230.892: 98.8004% ( 9) 00:08:13.607 28230.892 - 28432.542: 98.9113% ( 11) 00:08:13.607 28432.542 - 28634.191: 98.9819% ( 7) 00:08:13.607 28634.191 - 28835.840: 99.0625% ( 8) 00:08:13.607 28835.840 - 29037.489: 99.1431% ( 8) 00:08:13.607 29037.489 - 29239.138: 99.2339% ( 9) 00:08:13.607 29239.138 - 29440.788: 99.3246% ( 9) 00:08:13.607 29440.788 - 29642.437: 99.3548% ( 3) 00:08:13.607 34885.317 - 35086.966: 99.3649% ( 1) 00:08:13.607 35086.966 - 35288.615: 99.4556% ( 9) 00:08:13.607 35288.615 - 35490.265: 99.5262% ( 7) 00:08:13.607 35490.265 - 35691.914: 99.6069% ( 8) 00:08:13.607 35691.914 - 35893.563: 99.6976% ( 9) 00:08:13.607 35893.563 - 36095.212: 99.7883% ( 9) 00:08:13.607 36095.212 - 36296.862: 99.8690% ( 8) 00:08:13.607 36296.862 - 36498.511: 99.9496% ( 8) 00:08:13.607 36498.511 - 36700.160: 100.0000% ( 5) 00:08:13.607 00:08:13.607 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:13.607 ============================================================================== 00:08:13.607 Range in us Cumulative IO count 00:08:13.607 8973.391 - 9023.803: 0.0302% ( 3) 00:08:13.607 9023.803 - 9074.215: 0.0907% ( 6) 00:08:13.607 9074.215 - 9124.628: 0.1815% ( 9) 00:08:13.607 9124.628 - 9175.040: 0.2923% ( 11) 00:08:13.607 9175.040 - 9225.452: 0.4738% ( 18) 00:08:13.607 9225.452 - 9275.865: 0.5444% ( 7) 00:08:13.607 9275.865 - 9326.277: 0.6250% ( 8) 00:08:13.607 
9326.277 - 9376.689: 0.7762% ( 15) 00:08:13.607 9376.689 - 9427.102: 0.8972% ( 12) 00:08:13.607 9427.102 - 9477.514: 1.0685% ( 17) 00:08:13.607 9477.514 - 9527.926: 1.1996% ( 13) 00:08:13.607 9527.926 - 9578.338: 1.3004% ( 10) 00:08:13.607 9578.338 - 9628.751: 1.5323% ( 23) 00:08:13.607 9628.751 - 9679.163: 1.8347% ( 30) 00:08:13.607 9679.163 - 9729.575: 2.1169% ( 28) 00:08:13.607 9729.575 - 9779.988: 2.4294% ( 31) 00:08:13.607 9779.988 - 9830.400: 2.6915% ( 26) 00:08:13.607 9830.400 - 9880.812: 2.9335% ( 24) 00:08:13.607 9880.812 - 9931.225: 3.3266% ( 39) 00:08:13.607 9931.225 - 9981.637: 3.6290% ( 30) 00:08:13.607 9981.637 - 10032.049: 3.9113% ( 28) 00:08:13.607 10032.049 - 10082.462: 4.1331% ( 22) 00:08:13.607 10082.462 - 10132.874: 4.5060% ( 37) 00:08:13.607 10132.874 - 10183.286: 5.1109% ( 60) 00:08:13.607 10183.286 - 10233.698: 5.6452% ( 53) 00:08:13.607 10233.698 - 10284.111: 6.1492% ( 50) 00:08:13.607 10284.111 - 10334.523: 6.5020% ( 35) 00:08:13.607 10334.523 - 10384.935: 7.0867% ( 58) 00:08:13.607 10384.935 - 10435.348: 7.5403% ( 45) 00:08:13.607 10435.348 - 10485.760: 8.0746% ( 53) 00:08:13.608 10485.760 - 10536.172: 8.8004% ( 72) 00:08:13.608 10536.172 - 10586.585: 9.5665% ( 76) 00:08:13.608 10586.585 - 10636.997: 10.1008% ( 53) 00:08:13.608 10636.997 - 10687.409: 10.8569% ( 75) 00:08:13.608 10687.409 - 10737.822: 11.6431% ( 78) 00:08:13.608 10737.822 - 10788.234: 12.5302% ( 88) 00:08:13.608 10788.234 - 10838.646: 13.3972% ( 86) 00:08:13.608 10838.646 - 10889.058: 14.4355% ( 103) 00:08:13.608 10889.058 - 10939.471: 15.5444% ( 110) 00:08:13.608 10939.471 - 10989.883: 16.5827% ( 103) 00:08:13.608 10989.883 - 11040.295: 17.9032% ( 131) 00:08:13.608 11040.295 - 11090.708: 19.0423% ( 113) 00:08:13.608 11090.708 - 11141.120: 20.7762% ( 172) 00:08:13.608 11141.120 - 11191.532: 22.2177% ( 143) 00:08:13.608 11191.532 - 11241.945: 23.7198% ( 149) 00:08:13.608 11241.945 - 11292.357: 25.0101% ( 128) 00:08:13.608 11292.357 - 11342.769: 26.2097% ( 119) 00:08:13.608 11342.769 - 11393.182: 27.3488% ( 113) 00:08:13.608 11393.182 - 11443.594: 28.1048% ( 75) 00:08:13.608 11443.594 - 11494.006: 29.1835% ( 107) 00:08:13.608 11494.006 - 11544.418: 30.3427% ( 115) 00:08:13.608 11544.418 - 11594.831: 31.3206% ( 97) 00:08:13.608 11594.831 - 11645.243: 32.6411% ( 131) 00:08:13.608 11645.243 - 11695.655: 33.9214% ( 127) 00:08:13.608 11695.655 - 11746.068: 35.4536% ( 152) 00:08:13.608 11746.068 - 11796.480: 36.8448% ( 138) 00:08:13.608 11796.480 - 11846.892: 38.8609% ( 200) 00:08:13.608 11846.892 - 11897.305: 40.2823% ( 141) 00:08:13.608 11897.305 - 11947.717: 41.5423% ( 125) 00:08:13.608 11947.717 - 11998.129: 42.7823% ( 123) 00:08:13.608 11998.129 - 12048.542: 44.1028% ( 131) 00:08:13.608 12048.542 - 12098.954: 45.1008% ( 99) 00:08:13.608 12098.954 - 12149.366: 46.2097% ( 110) 00:08:13.608 12149.366 - 12199.778: 47.4294% ( 121) 00:08:13.608 12199.778 - 12250.191: 48.8407% ( 140) 00:08:13.608 12250.191 - 12300.603: 50.1613% ( 131) 00:08:13.608 12300.603 - 12351.015: 51.5423% ( 137) 00:08:13.608 12351.015 - 12401.428: 52.7923% ( 124) 00:08:13.608 12401.428 - 12451.840: 54.0423% ( 124) 00:08:13.608 12451.840 - 12502.252: 55.2722% ( 122) 00:08:13.608 12502.252 - 12552.665: 56.5121% ( 123) 00:08:13.608 12552.665 - 12603.077: 57.6512% ( 113) 00:08:13.608 12603.077 - 12653.489: 58.9315% ( 127) 00:08:13.608 12653.489 - 12703.902: 59.9496% ( 101) 00:08:13.608 12703.902 - 12754.314: 60.7560% ( 80) 00:08:13.608 12754.314 - 12804.726: 61.7742% ( 101) 00:08:13.608 12804.726 - 12855.138: 62.8327% ( 105) 00:08:13.608 
12855.138 - 12905.551: 63.7097% ( 87) 00:08:13.608 12905.551 - 13006.375: 65.4839% ( 176) 00:08:13.608 13006.375 - 13107.200: 67.3992% ( 190) 00:08:13.608 13107.200 - 13208.025: 68.6190% ( 121) 00:08:13.608 13208.025 - 13308.849: 69.8387% ( 121) 00:08:13.608 13308.849 - 13409.674: 71.0181% ( 117) 00:08:13.608 13409.674 - 13510.498: 72.2177% ( 119) 00:08:13.608 13510.498 - 13611.323: 73.2863% ( 106) 00:08:13.608 13611.323 - 13712.148: 74.4556% ( 116) 00:08:13.608 13712.148 - 13812.972: 75.5948% ( 113) 00:08:13.608 13812.972 - 13913.797: 76.4718% ( 87) 00:08:13.608 13913.797 - 14014.622: 77.2984% ( 82) 00:08:13.608 14014.622 - 14115.446: 78.1149% ( 81) 00:08:13.608 14115.446 - 14216.271: 78.9315% ( 81) 00:08:13.608 14216.271 - 14317.095: 79.6069% ( 67) 00:08:13.608 14317.095 - 14417.920: 80.3931% ( 78) 00:08:13.608 14417.920 - 14518.745: 81.1593% ( 76) 00:08:13.608 14518.745 - 14619.569: 81.8246% ( 66) 00:08:13.608 14619.569 - 14720.394: 82.6411% ( 81) 00:08:13.608 14720.394 - 14821.218: 83.3871% ( 74) 00:08:13.608 14821.218 - 14922.043: 83.9617% ( 57) 00:08:13.608 14922.043 - 15022.868: 84.5665% ( 60) 00:08:13.608 15022.868 - 15123.692: 85.1109% ( 54) 00:08:13.608 15123.692 - 15224.517: 85.5847% ( 47) 00:08:13.608 15224.517 - 15325.342: 86.3004% ( 71) 00:08:13.608 15325.342 - 15426.166: 86.9960% ( 69) 00:08:13.608 15426.166 - 15526.991: 87.5907% ( 59) 00:08:13.608 15526.991 - 15627.815: 88.3669% ( 77) 00:08:13.608 15627.815 - 15728.640: 89.1532% ( 78) 00:08:13.608 15728.640 - 15829.465: 89.7782% ( 62) 00:08:13.608 15829.465 - 15930.289: 90.3931% ( 61) 00:08:13.608 15930.289 - 16031.114: 91.1794% ( 78) 00:08:13.608 16031.114 - 16131.938: 91.8750% ( 69) 00:08:13.608 16131.938 - 16232.763: 92.4496% ( 57) 00:08:13.608 16232.763 - 16333.588: 93.0040% ( 55) 00:08:13.608 16333.588 - 16434.412: 93.7097% ( 70) 00:08:13.608 16434.412 - 16535.237: 94.0927% ( 38) 00:08:13.608 16535.237 - 16636.062: 94.4556% ( 36) 00:08:13.608 16636.062 - 16736.886: 95.0202% ( 56) 00:08:13.608 16736.886 - 16837.711: 95.3831% ( 36) 00:08:13.608 16837.711 - 16938.535: 95.6250% ( 24) 00:08:13.608 16938.535 - 17039.360: 95.8468% ( 22) 00:08:13.608 17039.360 - 17140.185: 96.2198% ( 37) 00:08:13.608 17140.185 - 17241.009: 96.7339% ( 51) 00:08:13.608 17241.009 - 17341.834: 96.9254% ( 19) 00:08:13.608 17341.834 - 17442.658: 97.2782% ( 35) 00:08:13.608 17442.658 - 17543.483: 97.5101% ( 23) 00:08:13.608 17543.483 - 17644.308: 97.7823% ( 27) 00:08:13.608 17644.308 - 17745.132: 97.9435% ( 16) 00:08:13.608 17745.132 - 17845.957: 98.1149% ( 17) 00:08:13.608 17845.957 - 17946.782: 98.2359% ( 12) 00:08:13.608 17946.782 - 18047.606: 98.3165% ( 8) 00:08:13.608 18047.606 - 18148.431: 98.4375% ( 12) 00:08:13.608 18148.431 - 18249.255: 98.4980% ( 6) 00:08:13.608 18249.255 - 18350.080: 98.5383% ( 4) 00:08:13.608 18350.080 - 18450.905: 98.5887% ( 5) 00:08:13.608 18450.905 - 18551.729: 98.6290% ( 4) 00:08:13.608 18551.729 - 18652.554: 98.6794% ( 5) 00:08:13.608 18652.554 - 18753.378: 98.7097% ( 3) 00:08:13.608 26416.049 - 26617.698: 98.7399% ( 3) 00:08:13.608 26617.698 - 26819.348: 98.7802% ( 4) 00:08:13.608 26819.348 - 27020.997: 98.8206% ( 4) 00:08:13.608 27020.997 - 27222.646: 98.8609% ( 4) 00:08:13.608 27222.646 - 27424.295: 98.9113% ( 5) 00:08:13.608 27424.295 - 27625.945: 98.9819% ( 7) 00:08:13.608 27625.945 - 27827.594: 99.0524% ( 7) 00:08:13.608 27827.594 - 28029.243: 99.1230% ( 7) 00:08:13.608 28029.243 - 28230.892: 99.1935% ( 7) 00:08:13.608 28230.892 - 28432.542: 99.2742% ( 8) 00:08:13.608 28432.542 - 28634.191: 99.3548% ( 8) 
00:08:13.608 33272.123 - 33473.772: 99.3750% ( 2) 00:08:13.608 33473.772 - 33675.422: 99.4556% ( 8) 00:08:13.608 33675.422 - 33877.071: 99.5161% ( 6) 00:08:13.608 33877.071 - 34078.720: 99.5968% ( 8) 00:08:13.608 34078.720 - 34280.369: 99.6774% ( 8) 00:08:13.608 34280.369 - 34482.018: 99.7379% ( 6) 00:08:13.608 34482.018 - 34683.668: 99.8286% ( 9) 00:08:13.608 34683.668 - 34885.317: 99.8790% ( 5) 00:08:13.608 34885.317 - 35086.966: 99.9798% ( 10) 00:08:13.608 35086.966 - 35288.615: 100.0000% ( 2) 00:08:13.608 00:08:13.608 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:13.608 ============================================================================== 00:08:13.608 Range in us Cumulative IO count 00:08:13.608 9124.628 - 9175.040: 0.0101% ( 1) 00:08:13.608 9175.040 - 9225.452: 0.0706% ( 6) 00:08:13.608 9225.452 - 9275.865: 0.1411% ( 7) 00:08:13.608 9275.865 - 9326.277: 0.2319% ( 9) 00:08:13.608 9326.277 - 9376.689: 0.4234% ( 19) 00:08:13.608 9376.689 - 9427.102: 0.6452% ( 22) 00:08:13.608 9427.102 - 9477.514: 0.8669% ( 22) 00:08:13.608 9477.514 - 9527.926: 1.2198% ( 35) 00:08:13.608 9527.926 - 9578.338: 1.5323% ( 31) 00:08:13.608 9578.338 - 9628.751: 1.9052% ( 37) 00:08:13.608 9628.751 - 9679.163: 2.2177% ( 31) 00:08:13.608 9679.163 - 9729.575: 2.4395% ( 22) 00:08:13.608 9729.575 - 9779.988: 2.7117% ( 27) 00:08:13.608 9779.988 - 9830.400: 3.0746% ( 36) 00:08:13.608 9830.400 - 9880.812: 3.2863% ( 21) 00:08:13.608 9880.812 - 9931.225: 3.6694% ( 38) 00:08:13.608 9931.225 - 9981.637: 3.9214% ( 25) 00:08:13.608 9981.637 - 10032.049: 4.3347% ( 41) 00:08:13.608 10032.049 - 10082.462: 4.6472% ( 31) 00:08:13.608 10082.462 - 10132.874: 5.0000% ( 35) 00:08:13.608 10132.874 - 10183.286: 5.2722% ( 27) 00:08:13.608 10183.286 - 10233.698: 5.5746% ( 30) 00:08:13.608 10233.698 - 10284.111: 5.8770% ( 30) 00:08:13.608 10284.111 - 10334.523: 6.1593% ( 28) 00:08:13.608 10334.523 - 10384.935: 6.4516% ( 29) 00:08:13.608 10384.935 - 10435.348: 6.7540% ( 30) 00:08:13.608 10435.348 - 10485.760: 7.1976% ( 44) 00:08:13.608 10485.760 - 10536.172: 7.7419% ( 54) 00:08:13.608 10536.172 - 10586.585: 8.2863% ( 54) 00:08:13.608 10586.585 - 10636.997: 8.9617% ( 67) 00:08:13.608 10636.997 - 10687.409: 9.8790% ( 91) 00:08:13.608 10687.409 - 10737.822: 10.9577% ( 107) 00:08:13.608 10737.822 - 10788.234: 11.9254% ( 96) 00:08:13.608 10788.234 - 10838.646: 12.9335% ( 100) 00:08:13.608 10838.646 - 10889.058: 14.2339% ( 129) 00:08:13.608 10889.058 - 10939.471: 15.4435% ( 120) 00:08:13.608 10939.471 - 10989.883: 16.9153% ( 146) 00:08:13.608 10989.883 - 11040.295: 18.1250% ( 120) 00:08:13.608 11040.295 - 11090.708: 19.5665% ( 143) 00:08:13.608 11090.708 - 11141.120: 20.7460% ( 117) 00:08:13.608 11141.120 - 11191.532: 21.9758% ( 122) 00:08:13.608 11191.532 - 11241.945: 23.2056% ( 122) 00:08:13.608 11241.945 - 11292.357: 24.6573% ( 144) 00:08:13.608 11292.357 - 11342.769: 25.9980% ( 133) 00:08:13.608 11342.769 - 11393.182: 27.2681% ( 126) 00:08:13.608 11393.182 - 11443.594: 28.7802% ( 150) 00:08:13.608 11443.594 - 11494.006: 30.1008% ( 131) 00:08:13.608 11494.006 - 11544.418: 31.1089% ( 100) 00:08:13.608 11544.418 - 11594.831: 32.2480% ( 113) 00:08:13.608 11594.831 - 11645.243: 33.4173% ( 116) 00:08:13.608 11645.243 - 11695.655: 34.6774% ( 125) 00:08:13.608 11695.655 - 11746.068: 35.9577% ( 127) 00:08:13.608 11746.068 - 11796.480: 37.8125% ( 184) 00:08:13.608 11796.480 - 11846.892: 39.3649% ( 154) 00:08:13.608 11846.892 - 11897.305: 40.7056% ( 133) 00:08:13.608 11897.305 - 11947.717: 42.3488% ( 163) 00:08:13.608 
11947.717 - 11998.129: 43.8508% ( 149) 00:08:13.608 11998.129 - 12048.542: 45.3024% ( 144) 00:08:13.608 12048.542 - 12098.954: 46.6230% ( 131) 00:08:13.608 12098.954 - 12149.366: 48.1250% ( 149) 00:08:13.608 12149.366 - 12199.778: 49.3750% ( 124) 00:08:13.609 12199.778 - 12250.191: 50.4435% ( 106) 00:08:13.609 12250.191 - 12300.603: 51.5020% ( 105) 00:08:13.609 12300.603 - 12351.015: 52.8327% ( 132) 00:08:13.609 12351.015 - 12401.428: 54.1734% ( 133) 00:08:13.609 12401.428 - 12451.840: 55.2923% ( 111) 00:08:13.609 12451.840 - 12502.252: 56.3206% ( 102) 00:08:13.609 12502.252 - 12552.665: 57.3085% ( 98) 00:08:13.609 12552.665 - 12603.077: 58.2460% ( 93) 00:08:13.609 12603.077 - 12653.489: 59.1431% ( 89) 00:08:13.609 12653.489 - 12703.902: 60.1109% ( 96) 00:08:13.609 12703.902 - 12754.314: 60.8367% ( 72) 00:08:13.609 12754.314 - 12804.726: 61.6935% ( 85) 00:08:13.609 12804.726 - 12855.138: 62.7016% ( 100) 00:08:13.609 12855.138 - 12905.551: 63.5181% ( 81) 00:08:13.609 12905.551 - 13006.375: 64.9798% ( 145) 00:08:13.609 13006.375 - 13107.200: 66.4617% ( 147) 00:08:13.609 13107.200 - 13208.025: 67.7117% ( 124) 00:08:13.609 13208.025 - 13308.849: 69.2339% ( 151) 00:08:13.609 13308.849 - 13409.674: 70.2823% ( 104) 00:08:13.609 13409.674 - 13510.498: 71.3810% ( 109) 00:08:13.609 13510.498 - 13611.323: 72.6008% ( 121) 00:08:13.609 13611.323 - 13712.148: 73.6492% ( 104) 00:08:13.609 13712.148 - 13812.972: 74.6069% ( 95) 00:08:13.609 13812.972 - 13913.797: 75.6452% ( 103) 00:08:13.609 13913.797 - 14014.622: 76.6633% ( 101) 00:08:13.609 14014.622 - 14115.446: 77.9234% ( 125) 00:08:13.609 14115.446 - 14216.271: 79.0927% ( 116) 00:08:13.609 14216.271 - 14317.095: 80.3629% ( 126) 00:08:13.609 14317.095 - 14417.920: 80.9476% ( 58) 00:08:13.609 14417.920 - 14518.745: 81.5726% ( 62) 00:08:13.609 14518.745 - 14619.569: 82.1573% ( 58) 00:08:13.609 14619.569 - 14720.394: 82.7419% ( 58) 00:08:13.609 14720.394 - 14821.218: 83.5081% ( 76) 00:08:13.609 14821.218 - 14922.043: 84.4859% ( 97) 00:08:13.609 14922.043 - 15022.868: 85.2520% ( 76) 00:08:13.609 15022.868 - 15123.692: 85.9577% ( 70) 00:08:13.609 15123.692 - 15224.517: 86.4718% ( 51) 00:08:13.609 15224.517 - 15325.342: 87.0161% ( 54) 00:08:13.609 15325.342 - 15426.166: 87.7117% ( 69) 00:08:13.609 15426.166 - 15526.991: 88.3972% ( 68) 00:08:13.609 15526.991 - 15627.815: 89.3246% ( 92) 00:08:13.609 15627.815 - 15728.640: 89.9698% ( 64) 00:08:13.609 15728.640 - 15829.465: 90.6855% ( 71) 00:08:13.609 15829.465 - 15930.289: 91.1593% ( 47) 00:08:13.609 15930.289 - 16031.114: 91.4718% ( 31) 00:08:13.609 16031.114 - 16131.938: 91.7540% ( 28) 00:08:13.609 16131.938 - 16232.763: 92.0565% ( 30) 00:08:13.609 16232.763 - 16333.588: 92.3790% ( 32) 00:08:13.609 16333.588 - 16434.412: 92.7823% ( 40) 00:08:13.609 16434.412 - 16535.237: 93.1754% ( 39) 00:08:13.609 16535.237 - 16636.062: 93.5786% ( 40) 00:08:13.609 16636.062 - 16736.886: 93.9415% ( 36) 00:08:13.609 16736.886 - 16837.711: 94.4153% ( 47) 00:08:13.609 16837.711 - 16938.535: 94.7984% ( 38) 00:08:13.609 16938.535 - 17039.360: 95.2319% ( 43) 00:08:13.609 17039.360 - 17140.185: 95.6754% ( 44) 00:08:13.609 17140.185 - 17241.009: 96.0181% ( 34) 00:08:13.609 17241.009 - 17341.834: 96.3407% ( 32) 00:08:13.609 17341.834 - 17442.658: 96.7339% ( 39) 00:08:13.609 17442.658 - 17543.483: 97.0565% ( 32) 00:08:13.609 17543.483 - 17644.308: 97.3992% ( 34) 00:08:13.609 17644.308 - 17745.132: 97.7016% ( 30) 00:08:13.609 17745.132 - 17845.957: 97.9637% ( 26) 00:08:13.609 17845.957 - 17946.782: 98.1754% ( 21) 00:08:13.609 17946.782 
- 18047.606: 98.3065% ( 13) 00:08:13.609 18047.606 - 18148.431: 98.4274% ( 12) 00:08:13.609 18148.431 - 18249.255: 98.4879% ( 6) 00:08:13.609 18249.255 - 18350.080: 98.5383% ( 5) 00:08:13.609 18350.080 - 18450.905: 98.5988% ( 6) 00:08:13.609 18450.905 - 18551.729: 98.6593% ( 6) 00:08:13.609 18551.729 - 18652.554: 98.7097% ( 5) 00:08:13.609 25105.329 - 25206.154: 98.7500% ( 4) 00:08:13.609 25206.154 - 25306.978: 98.7903% ( 4) 00:08:13.609 25306.978 - 25407.803: 98.8306% ( 4) 00:08:13.609 25407.803 - 25508.628: 98.8710% ( 4) 00:08:13.609 25508.628 - 25609.452: 98.9012% ( 3) 00:08:13.609 25609.452 - 25710.277: 98.9415% ( 4) 00:08:13.609 25710.277 - 25811.102: 98.9819% ( 4) 00:08:13.609 25811.102 - 26012.751: 99.0625% ( 8) 00:08:13.609 26012.751 - 26214.400: 99.1431% ( 8) 00:08:13.609 26214.400 - 26416.049: 99.2238% ( 8) 00:08:13.609 26416.049 - 26617.698: 99.2944% ( 7) 00:08:13.609 26617.698 - 26819.348: 99.3548% ( 6) 00:08:13.609 31860.578 - 32062.228: 99.3952% ( 4) 00:08:13.609 32062.228 - 32263.877: 99.4758% ( 8) 00:08:13.609 32263.877 - 32465.526: 99.5565% ( 8) 00:08:13.609 32465.526 - 32667.175: 99.6371% ( 8) 00:08:13.609 32667.175 - 32868.825: 99.7177% ( 8) 00:08:13.609 32868.825 - 33070.474: 99.7984% ( 8) 00:08:13.609 33070.474 - 33272.123: 99.8790% ( 8) 00:08:13.609 33272.123 - 33473.772: 99.9496% ( 7) 00:08:13.609 33473.772 - 33675.422: 100.0000% ( 5) 00:08:13.609 00:08:13.609 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:13.609 ============================================================================== 00:08:13.609 Range in us Cumulative IO count 00:08:13.609 8670.917 - 8721.329: 0.0101% ( 1) 00:08:13.609 8721.329 - 8771.742: 0.0302% ( 2) 00:08:13.609 8771.742 - 8822.154: 0.0605% ( 3) 00:08:13.609 8822.154 - 8872.566: 0.0907% ( 3) 00:08:13.609 8872.566 - 8922.978: 0.1109% ( 2) 00:08:13.609 8922.978 - 8973.391: 0.1512% ( 4) 00:08:13.609 8973.391 - 9023.803: 0.1815% ( 3) 00:08:13.609 9023.803 - 9074.215: 0.2520% ( 7) 00:08:13.609 9074.215 - 9124.628: 0.3528% ( 10) 00:08:13.609 9124.628 - 9175.040: 0.4940% ( 14) 00:08:13.609 9175.040 - 9225.452: 0.8468% ( 35) 00:08:13.609 9225.452 - 9275.865: 0.9577% ( 11) 00:08:13.609 9275.865 - 9326.277: 1.0585% ( 10) 00:08:13.609 9326.277 - 9376.689: 1.1694% ( 11) 00:08:13.609 9376.689 - 9427.102: 1.3407% ( 17) 00:08:13.609 9427.102 - 9477.514: 1.4415% ( 10) 00:08:13.609 9477.514 - 9527.926: 1.6835% ( 24) 00:08:13.609 9527.926 - 9578.338: 1.9153% ( 23) 00:08:13.609 9578.338 - 9628.751: 2.0665% ( 15) 00:08:13.609 9628.751 - 9679.163: 2.2581% ( 19) 00:08:13.609 9679.163 - 9729.575: 2.4899% ( 23) 00:08:13.609 9729.575 - 9779.988: 2.7319% ( 24) 00:08:13.609 9779.988 - 9830.400: 2.8730% ( 14) 00:08:13.609 9830.400 - 9880.812: 3.1351% ( 26) 00:08:13.609 9880.812 - 9931.225: 3.4577% ( 32) 00:08:13.609 9931.225 - 9981.637: 3.7399% ( 28) 00:08:13.609 9981.637 - 10032.049: 4.1129% ( 37) 00:08:13.609 10032.049 - 10082.462: 4.3548% ( 24) 00:08:13.609 10082.462 - 10132.874: 4.7480% ( 39) 00:08:13.609 10132.874 - 10183.286: 5.1411% ( 39) 00:08:13.609 10183.286 - 10233.698: 5.6149% ( 47) 00:08:13.609 10233.698 - 10284.111: 5.9073% ( 29) 00:08:13.609 10284.111 - 10334.523: 6.3609% ( 45) 00:08:13.609 10334.523 - 10384.935: 6.9758% ( 61) 00:08:13.609 10384.935 - 10435.348: 7.7016% ( 72) 00:08:13.609 10435.348 - 10485.760: 8.3569% ( 65) 00:08:13.609 10485.760 - 10536.172: 8.7601% ( 40) 00:08:13.609 10536.172 - 10586.585: 9.4355% ( 67) 00:08:13.609 10586.585 - 10636.997: 10.0202% ( 58) 00:08:13.609 10636.997 - 10687.409: 10.8367% ( 81) 
00:08:13.609 10687.409 - 10737.822: 11.9657% ( 112) 00:08:13.609 10737.822 - 10788.234: 13.3569% ( 138) 00:08:13.609 10788.234 - 10838.646: 14.5060% ( 114) 00:08:13.609 10838.646 - 10889.058: 15.6250% ( 111) 00:08:13.609 10889.058 - 10939.471: 17.3286% ( 169) 00:08:13.609 10939.471 - 10989.883: 18.6492% ( 131) 00:08:13.609 10989.883 - 11040.295: 19.9395% ( 128) 00:08:13.609 11040.295 - 11090.708: 21.0484% ( 110) 00:08:13.609 11090.708 - 11141.120: 22.2278% ( 117) 00:08:13.609 11141.120 - 11191.532: 23.5383% ( 130) 00:08:13.609 11191.532 - 11241.945: 24.7278% ( 118) 00:08:13.609 11241.945 - 11292.357: 26.1190% ( 138) 00:08:13.609 11292.357 - 11342.769: 27.7419% ( 161) 00:08:13.609 11342.769 - 11393.182: 28.8508% ( 110) 00:08:13.609 11393.182 - 11443.594: 30.0101% ( 115) 00:08:13.609 11443.594 - 11494.006: 31.2903% ( 127) 00:08:13.609 11494.006 - 11544.418: 32.4597% ( 116) 00:08:13.609 11544.418 - 11594.831: 33.6794% ( 121) 00:08:13.609 11594.831 - 11645.243: 34.9798% ( 129) 00:08:13.609 11645.243 - 11695.655: 36.2097% ( 122) 00:08:13.609 11695.655 - 11746.068: 37.3992% ( 118) 00:08:13.609 11746.068 - 11796.480: 38.6492% ( 124) 00:08:13.609 11796.480 - 11846.892: 39.7681% ( 111) 00:08:13.609 11846.892 - 11897.305: 40.9274% ( 115) 00:08:13.609 11897.305 - 11947.717: 42.5000% ( 156) 00:08:13.609 11947.717 - 11998.129: 44.3851% ( 187) 00:08:13.609 11998.129 - 12048.542: 45.8871% ( 149) 00:08:13.609 12048.542 - 12098.954: 47.0565% ( 116) 00:08:13.609 12098.954 - 12149.366: 48.5988% ( 153) 00:08:13.609 12149.366 - 12199.778: 49.8589% ( 125) 00:08:13.609 12199.778 - 12250.191: 51.0887% ( 122) 00:08:13.609 12250.191 - 12300.603: 52.1976% ( 110) 00:08:13.609 12300.603 - 12351.015: 53.1956% ( 99) 00:08:13.609 12351.015 - 12401.428: 54.3246% ( 112) 00:08:13.609 12401.428 - 12451.840: 55.4133% ( 108) 00:08:13.609 12451.840 - 12502.252: 56.7339% ( 131) 00:08:13.609 12502.252 - 12552.665: 57.5706% ( 83) 00:08:13.609 12552.665 - 12603.077: 58.6996% ( 112) 00:08:13.609 12603.077 - 12653.489: 59.7278% ( 102) 00:08:13.609 12653.489 - 12703.902: 60.8266% ( 109) 00:08:13.609 12703.902 - 12754.314: 61.8548% ( 102) 00:08:13.609 12754.314 - 12804.726: 62.7923% ( 93) 00:08:13.609 12804.726 - 12855.138: 63.7500% ( 95) 00:08:13.609 12855.138 - 12905.551: 64.5060% ( 75) 00:08:13.609 12905.551 - 13006.375: 65.6552% ( 114) 00:08:13.609 13006.375 - 13107.200: 66.8448% ( 118) 00:08:13.609 13107.200 - 13208.025: 67.7923% ( 94) 00:08:13.609 13208.025 - 13308.849: 68.9819% ( 118) 00:08:13.609 13308.849 - 13409.674: 70.3528% ( 136) 00:08:13.609 13409.674 - 13510.498: 71.1996% ( 84) 00:08:13.609 13510.498 - 13611.323: 71.9355% ( 73) 00:08:13.609 13611.323 - 13712.148: 72.7621% ( 82) 00:08:13.609 13712.148 - 13812.972: 73.6593% ( 89) 00:08:13.609 13812.972 - 13913.797: 74.5262% ( 86) 00:08:13.609 13913.797 - 14014.622: 75.4234% ( 89) 00:08:13.609 14014.622 - 14115.446: 76.3105% ( 88) 00:08:13.610 14115.446 - 14216.271: 77.0161% ( 70) 00:08:13.610 14216.271 - 14317.095: 77.7923% ( 77) 00:08:13.610 14317.095 - 14417.920: 78.9617% ( 116) 00:08:13.610 14417.920 - 14518.745: 79.9899% ( 102) 00:08:13.610 14518.745 - 14619.569: 80.9173% ( 92) 00:08:13.610 14619.569 - 14720.394: 82.2077% ( 128) 00:08:13.610 14720.394 - 14821.218: 83.2964% ( 108) 00:08:13.610 14821.218 - 14922.043: 84.3548% ( 105) 00:08:13.610 14922.043 - 15022.868: 85.4234% ( 106) 00:08:13.610 15022.868 - 15123.692: 86.3609% ( 93) 00:08:13.610 15123.692 - 15224.517: 87.0867% ( 72) 00:08:13.610 15224.517 - 15325.342: 87.8931% ( 80) 00:08:13.610 15325.342 - 15426.166: 
88.5887% ( 69) 00:08:13.610 15426.166 - 15526.991: 89.0222% ( 43) 00:08:13.610 15526.991 - 15627.815: 89.3851% ( 36) 00:08:13.610 15627.815 - 15728.640: 89.8085% ( 42) 00:08:13.610 15728.640 - 15829.465: 90.1109% ( 30) 00:08:13.610 15829.465 - 15930.289: 90.6149% ( 50) 00:08:13.610 15930.289 - 16031.114: 90.9980% ( 38) 00:08:13.610 16031.114 - 16131.938: 91.3407% ( 34) 00:08:13.610 16131.938 - 16232.763: 91.7641% ( 42) 00:08:13.610 16232.763 - 16333.588: 92.2984% ( 53) 00:08:13.610 16333.588 - 16434.412: 93.0645% ( 76) 00:08:13.610 16434.412 - 16535.237: 93.4879% ( 42) 00:08:13.610 16535.237 - 16636.062: 93.8810% ( 39) 00:08:13.610 16636.062 - 16736.886: 94.2944% ( 41) 00:08:13.610 16736.886 - 16837.711: 94.6774% ( 38) 00:08:13.610 16837.711 - 16938.535: 95.0101% ( 33) 00:08:13.610 16938.535 - 17039.360: 95.3528% ( 34) 00:08:13.610 17039.360 - 17140.185: 95.8367% ( 48) 00:08:13.610 17140.185 - 17241.009: 96.1593% ( 32) 00:08:13.610 17241.009 - 17341.834: 96.4919% ( 33) 00:08:13.610 17341.834 - 17442.658: 96.6129% ( 12) 00:08:13.610 17442.658 - 17543.483: 96.6734% ( 6) 00:08:13.610 17543.483 - 17644.308: 96.8448% ( 17) 00:08:13.610 17644.308 - 17745.132: 97.0464% ( 20) 00:08:13.610 17745.132 - 17845.957: 97.2782% ( 23) 00:08:13.610 17845.957 - 17946.782: 97.5605% ( 28) 00:08:13.610 17946.782 - 18047.606: 97.7923% ( 23) 00:08:13.610 18047.606 - 18148.431: 98.0343% ( 24) 00:08:13.610 18148.431 - 18249.255: 98.2056% ( 17) 00:08:13.610 18249.255 - 18350.080: 98.3770% ( 17) 00:08:13.610 18350.080 - 18450.905: 98.5383% ( 16) 00:08:13.610 18450.905 - 18551.729: 98.6290% ( 9) 00:08:13.610 18551.729 - 18652.554: 98.6895% ( 6) 00:08:13.610 18652.554 - 18753.378: 98.7097% ( 2) 00:08:13.610 24298.732 - 24399.557: 98.7298% ( 2) 00:08:13.610 24399.557 - 24500.382: 98.7500% ( 2) 00:08:13.610 24500.382 - 24601.206: 98.7702% ( 2) 00:08:13.610 24601.206 - 24702.031: 98.7903% ( 2) 00:08:13.610 24702.031 - 24802.855: 98.8105% ( 2) 00:08:13.610 24802.855 - 24903.680: 98.8407% ( 3) 00:08:13.610 24903.680 - 25004.505: 98.8609% ( 2) 00:08:13.610 25004.505 - 25105.329: 98.8810% ( 2) 00:08:13.610 25105.329 - 25206.154: 98.9012% ( 2) 00:08:13.610 25206.154 - 25306.978: 98.9315% ( 3) 00:08:13.610 25306.978 - 25407.803: 98.9718% ( 4) 00:08:13.610 25407.803 - 25508.628: 99.0121% ( 4) 00:08:13.610 25508.628 - 25609.452: 99.0524% ( 4) 00:08:13.610 25609.452 - 25710.277: 99.0927% ( 4) 00:08:13.610 25710.277 - 25811.102: 99.1331% ( 4) 00:08:13.610 25811.102 - 26012.751: 99.2238% ( 9) 00:08:13.610 26012.751 - 26214.400: 99.3044% ( 8) 00:08:13.610 26214.400 - 26416.049: 99.3548% ( 5) 00:08:13.610 31053.982 - 31255.631: 99.4153% ( 6) 00:08:13.610 31255.631 - 31457.280: 99.4960% ( 8) 00:08:13.610 31457.280 - 31658.929: 99.5766% ( 8) 00:08:13.610 31658.929 - 31860.578: 99.6673% ( 9) 00:08:13.610 31860.578 - 32062.228: 99.7480% ( 8) 00:08:13.610 32062.228 - 32263.877: 99.8286% ( 8) 00:08:13.610 32263.877 - 32465.526: 99.9194% ( 9) 00:08:13.610 32465.526 - 32667.175: 100.0000% ( 8) 00:08:13.610 00:08:13.610 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:13.610 ============================================================================== 00:08:13.610 Range in us Cumulative IO count 00:08:13.610 8973.391 - 9023.803: 0.0202% ( 2) 00:08:13.610 9023.803 - 9074.215: 0.0605% ( 4) 00:08:13.610 9074.215 - 9124.628: 0.1411% ( 8) 00:08:13.610 9124.628 - 9175.040: 0.2319% ( 9) 00:08:13.610 9175.040 - 9225.452: 0.4335% ( 20) 00:08:13.610 9225.452 - 9275.865: 0.6048% ( 17) 00:08:13.610 9275.865 - 9326.277: 0.7863% ( 18) 
00:08:13.610 9326.277 - 9376.689: 1.0585% ( 27) 00:08:13.610 9376.689 - 9427.102: 1.3911% ( 33) 00:08:13.610 9427.102 - 9477.514: 1.5927% ( 20) 00:08:13.610 9477.514 - 9527.926: 1.7540% ( 16) 00:08:13.610 9527.926 - 9578.338: 1.9052% ( 15) 00:08:13.610 9578.338 - 9628.751: 2.0262% ( 12) 00:08:13.610 9628.751 - 9679.163: 2.1774% ( 15) 00:08:13.610 9679.163 - 9729.575: 2.3690% ( 19) 00:08:13.610 9729.575 - 9779.988: 2.4294% ( 6) 00:08:13.610 9779.988 - 9830.400: 2.4698% ( 4) 00:08:13.610 9830.400 - 9880.812: 2.5605% ( 9) 00:08:13.610 9880.812 - 9931.225: 2.7117% ( 15) 00:08:13.610 9931.225 - 9981.637: 2.8629% ( 15) 00:08:13.610 9981.637 - 10032.049: 3.1552% ( 29) 00:08:13.610 10032.049 - 10082.462: 3.6190% ( 46) 00:08:13.610 10082.462 - 10132.874: 3.9718% ( 35) 00:08:13.610 10132.874 - 10183.286: 4.3851% ( 41) 00:08:13.610 10183.286 - 10233.698: 4.7883% ( 40) 00:08:13.610 10233.698 - 10284.111: 5.1109% ( 32) 00:08:13.610 10284.111 - 10334.523: 5.5544% ( 44) 00:08:13.610 10334.523 - 10384.935: 6.0181% ( 46) 00:08:13.610 10384.935 - 10435.348: 6.6835% ( 66) 00:08:13.610 10435.348 - 10485.760: 7.2581% ( 57) 00:08:13.610 10485.760 - 10536.172: 7.7823% ( 52) 00:08:13.610 10536.172 - 10586.585: 8.4677% ( 68) 00:08:13.610 10586.585 - 10636.997: 9.2944% ( 82) 00:08:13.610 10636.997 - 10687.409: 10.1411% ( 84) 00:08:13.610 10687.409 - 10737.822: 11.1895% ( 104) 00:08:13.610 10737.822 - 10788.234: 12.2480% ( 105) 00:08:13.610 10788.234 - 10838.646: 13.3770% ( 112) 00:08:13.610 10838.646 - 10889.058: 14.5262% ( 114) 00:08:13.610 10889.058 - 10939.471: 15.7258% ( 119) 00:08:13.610 10939.471 - 10989.883: 17.0867% ( 135) 00:08:13.610 10989.883 - 11040.295: 18.6694% ( 157) 00:08:13.610 11040.295 - 11090.708: 20.1210% ( 144) 00:08:13.610 11090.708 - 11141.120: 21.4819% ( 135) 00:08:13.610 11141.120 - 11191.532: 23.2460% ( 175) 00:08:13.610 11191.532 - 11241.945: 25.2722% ( 201) 00:08:13.610 11241.945 - 11292.357: 26.8851% ( 160) 00:08:13.610 11292.357 - 11342.769: 28.2762% ( 138) 00:08:13.610 11342.769 - 11393.182: 29.5464% ( 126) 00:08:13.610 11393.182 - 11443.594: 31.2298% ( 167) 00:08:13.610 11443.594 - 11494.006: 32.5806% ( 134) 00:08:13.610 11494.006 - 11544.418: 33.9315% ( 134) 00:08:13.610 11544.418 - 11594.831: 35.2621% ( 132) 00:08:13.610 11594.831 - 11645.243: 36.4919% ( 122) 00:08:13.610 11645.243 - 11695.655: 37.8427% ( 134) 00:08:13.610 11695.655 - 11746.068: 39.0927% ( 124) 00:08:13.610 11746.068 - 11796.480: 40.5444% ( 144) 00:08:13.610 11796.480 - 11846.892: 41.8952% ( 134) 00:08:13.610 11846.892 - 11897.305: 43.3569% ( 145) 00:08:13.610 11897.305 - 11947.717: 44.8488% ( 148) 00:08:13.610 11947.717 - 11998.129: 46.4214% ( 156) 00:08:13.610 11998.129 - 12048.542: 48.0040% ( 157) 00:08:13.610 12048.542 - 12098.954: 49.6270% ( 161) 00:08:13.610 12098.954 - 12149.366: 50.9274% ( 129) 00:08:13.610 12149.366 - 12199.778: 52.1371% ( 120) 00:08:13.610 12199.778 - 12250.191: 53.2560% ( 111) 00:08:13.610 12250.191 - 12300.603: 54.3044% ( 104) 00:08:13.610 12300.603 - 12351.015: 55.5645% ( 125) 00:08:13.610 12351.015 - 12401.428: 56.7843% ( 121) 00:08:13.610 12401.428 - 12451.840: 57.8629% ( 107) 00:08:13.610 12451.840 - 12502.252: 58.9516% ( 108) 00:08:13.610 12502.252 - 12552.665: 59.9294% ( 97) 00:08:13.610 12552.665 - 12603.077: 60.7863% ( 85) 00:08:13.610 12603.077 - 12653.489: 61.4415% ( 65) 00:08:13.610 12653.489 - 12703.902: 62.0262% ( 58) 00:08:13.610 12703.902 - 12754.314: 62.5302% ( 50) 00:08:13.610 12754.314 - 12804.726: 63.0242% ( 49) 00:08:13.610 12804.726 - 12855.138: 63.8306% ( 80) 
00:08:13.610 12855.138 - 12905.551: 64.3246% ( 49) 00:08:13.610 12905.551 - 13006.375: 65.2823% ( 95) 00:08:13.610 13006.375 - 13107.200: 66.4718% ( 118) 00:08:13.610 13107.200 - 13208.025: 68.1956% ( 171) 00:08:13.610 13208.025 - 13308.849: 69.4254% ( 122) 00:08:13.610 13308.849 - 13409.674: 70.4032% ( 97) 00:08:13.610 13409.674 - 13510.498: 71.2097% ( 80) 00:08:13.610 13510.498 - 13611.323: 71.9960% ( 78) 00:08:13.610 13611.323 - 13712.148: 72.8024% ( 80) 00:08:13.610 13712.148 - 13812.972: 73.4173% ( 61) 00:08:13.610 13812.972 - 13913.797: 73.9214% ( 50) 00:08:13.610 13913.797 - 14014.622: 74.6573% ( 73) 00:08:13.610 14014.622 - 14115.446: 75.4738% ( 81) 00:08:13.610 14115.446 - 14216.271: 76.1290% ( 65) 00:08:13.610 14216.271 - 14317.095: 76.8145% ( 68) 00:08:13.610 14317.095 - 14417.920: 77.5706% ( 75) 00:08:13.610 14417.920 - 14518.745: 78.3367% ( 76) 00:08:13.610 14518.745 - 14619.569: 79.1129% ( 77) 00:08:13.610 14619.569 - 14720.394: 80.2016% ( 108) 00:08:13.610 14720.394 - 14821.218: 81.5323% ( 132) 00:08:13.610 14821.218 - 14922.043: 83.1552% ( 161) 00:08:13.610 14922.043 - 15022.868: 84.5464% ( 138) 00:08:13.610 15022.868 - 15123.692: 85.9476% ( 139) 00:08:13.610 15123.692 - 15224.517: 87.0262% ( 107) 00:08:13.610 15224.517 - 15325.342: 87.8226% ( 79) 00:08:13.610 15325.342 - 15426.166: 88.5484% ( 72) 00:08:13.610 15426.166 - 15526.991: 89.1532% ( 60) 00:08:13.610 15526.991 - 15627.815: 89.6875% ( 53) 00:08:13.610 15627.815 - 15728.640: 90.1915% ( 50) 00:08:13.610 15728.640 - 15829.465: 90.7157% ( 52) 00:08:13.610 15829.465 - 15930.289: 91.1089% ( 39) 00:08:13.610 15930.289 - 16031.114: 91.4415% ( 33) 00:08:13.610 16031.114 - 16131.938: 91.7440% ( 30) 00:08:13.610 16131.938 - 16232.763: 92.0161% ( 27) 00:08:13.610 16232.763 - 16333.588: 92.2278% ( 21) 00:08:13.610 16333.588 - 16434.412: 92.4698% ( 24) 00:08:13.611 16434.412 - 16535.237: 92.7419% ( 27) 00:08:13.611 16535.237 - 16636.062: 93.2560% ( 51) 00:08:13.611 16636.062 - 16736.886: 93.7903% ( 53) 00:08:13.611 16736.886 - 16837.711: 94.3448% ( 55) 00:08:13.611 16837.711 - 16938.535: 94.8085% ( 46) 00:08:13.611 16938.535 - 17039.360: 95.4133% ( 60) 00:08:13.611 17039.360 - 17140.185: 95.8972% ( 48) 00:08:13.611 17140.185 - 17241.009: 96.2702% ( 37) 00:08:13.611 17241.009 - 17341.834: 96.5524% ( 28) 00:08:13.611 17341.834 - 17442.658: 96.9153% ( 36) 00:08:13.611 17442.658 - 17543.483: 97.1976% ( 28) 00:08:13.611 17543.483 - 17644.308: 97.5101% ( 31) 00:08:13.611 17644.308 - 17745.132: 97.7319% ( 22) 00:08:13.611 17745.132 - 17845.957: 97.9032% ( 17) 00:08:13.611 17845.957 - 17946.782: 98.0746% ( 17) 00:08:13.611 17946.782 - 18047.606: 98.2258% ( 15) 00:08:13.611 18047.606 - 18148.431: 98.3367% ( 11) 00:08:13.611 18148.431 - 18249.255: 98.4173% ( 8) 00:08:13.611 18249.255 - 18350.080: 98.4778% ( 6) 00:08:13.611 18350.080 - 18450.905: 98.5383% ( 6) 00:08:13.611 18450.905 - 18551.729: 98.5988% ( 6) 00:08:13.611 18551.729 - 18652.554: 98.6593% ( 6) 00:08:13.611 18652.554 - 18753.378: 98.7097% ( 5) 00:08:13.611 22786.363 - 22887.188: 98.7298% ( 2) 00:08:13.611 22887.188 - 22988.012: 98.7702% ( 4) 00:08:13.611 22988.012 - 23088.837: 98.8105% ( 4) 00:08:13.611 23088.837 - 23189.662: 98.8508% ( 4) 00:08:13.611 23189.662 - 23290.486: 98.8911% ( 4) 00:08:13.611 23290.486 - 23391.311: 98.9315% ( 4) 00:08:13.611 23391.311 - 23492.135: 98.9819% ( 5) 00:08:13.611 23492.135 - 23592.960: 99.0222% ( 4) 00:08:13.611 23592.960 - 23693.785: 99.0625% ( 4) 00:08:13.611 23693.785 - 23794.609: 99.1028% ( 4) 00:08:13.611 23794.609 - 23895.434: 
99.1431% ( 4) 00:08:13.611 23895.434 - 23996.258: 99.1734% ( 3) 00:08:13.611 23996.258 - 24097.083: 99.2036% ( 3) 00:08:13.611 24097.083 - 24197.908: 99.2440% ( 4) 00:08:13.611 24197.908 - 24298.732: 99.2843% ( 4) 00:08:13.611 24298.732 - 24399.557: 99.3246% ( 4) 00:08:13.611 24399.557 - 24500.382: 99.3548% ( 3) 00:08:13.611 30045.735 - 30247.385: 99.3952% ( 4) 00:08:13.611 30247.385 - 30449.034: 99.4758% ( 8) 00:08:13.611 30449.034 - 30650.683: 99.5565% ( 8) 00:08:13.611 30650.683 - 30852.332: 99.6270% ( 7) 00:08:13.611 30852.332 - 31053.982: 99.7077% ( 8) 00:08:13.611 31053.982 - 31255.631: 99.7681% ( 6) 00:08:13.611 31255.631 - 31457.280: 99.8488% ( 8) 00:08:13.611 31457.280 - 31658.929: 99.9194% ( 7) 00:08:13.611 31658.929 - 31860.578: 100.0000% ( 8) 00:08:13.611 00:08:13.611 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:13.611 ============================================================================== 00:08:13.611 Range in us Cumulative IO count 00:08:13.611 8620.505 - 8670.917: 0.0200% ( 2) 00:08:13.611 8670.917 - 8721.329: 0.0501% ( 3) 00:08:13.611 8721.329 - 8771.742: 0.0801% ( 3) 00:08:13.611 8771.742 - 8822.154: 0.1102% ( 3) 00:08:13.611 8822.154 - 8872.566: 0.1302% ( 2) 00:08:13.611 8872.566 - 8922.978: 0.1603% ( 3) 00:08:13.611 8922.978 - 8973.391: 0.1903% ( 3) 00:08:13.611 8973.391 - 9023.803: 0.3906% ( 20) 00:08:13.611 9023.803 - 9074.215: 0.5108% ( 12) 00:08:13.611 9074.215 - 9124.628: 0.5909% ( 8) 00:08:13.611 9124.628 - 9175.040: 0.7212% ( 13) 00:08:13.611 9175.040 - 9225.452: 0.8113% ( 9) 00:08:13.611 9225.452 - 9275.865: 0.9515% ( 14) 00:08:13.611 9275.865 - 9326.277: 1.1218% ( 17) 00:08:13.611 9326.277 - 9376.689: 1.2520% ( 13) 00:08:13.611 9376.689 - 9427.102: 1.3622% ( 11) 00:08:13.611 9427.102 - 9477.514: 1.4523% ( 9) 00:08:13.611 9477.514 - 9527.926: 1.6727% ( 22) 00:08:13.611 9527.926 - 9578.338: 1.7929% ( 12) 00:08:13.611 9578.338 - 9628.751: 1.8830% ( 9) 00:08:13.611 9628.751 - 9679.163: 2.0032% ( 12) 00:08:13.611 9679.163 - 9729.575: 2.1134% ( 11) 00:08:13.611 9729.575 - 9779.988: 2.3438% ( 23) 00:08:13.611 9779.988 - 9830.400: 2.5240% ( 18) 00:08:13.611 9830.400 - 9880.812: 2.7544% ( 23) 00:08:13.611 9880.812 - 9931.225: 3.0549% ( 30) 00:08:13.611 9931.225 - 9981.637: 3.2051% ( 15) 00:08:13.611 9981.637 - 10032.049: 3.4054% ( 20) 00:08:13.611 10032.049 - 10082.462: 3.5657% ( 16) 00:08:13.611 10082.462 - 10132.874: 3.8662% ( 30) 00:08:13.611 10132.874 - 10183.286: 4.2268% ( 36) 00:08:13.611 10183.286 - 10233.698: 4.5873% ( 36) 00:08:13.611 10233.698 - 10284.111: 4.9679% ( 38) 00:08:13.611 10284.111 - 10334.523: 5.3185% ( 35) 00:08:13.611 10334.523 - 10384.935: 5.7792% ( 46) 00:08:13.611 10384.935 - 10435.348: 6.3201% ( 54) 00:08:13.611 10435.348 - 10485.760: 7.0212% ( 70) 00:08:13.611 10485.760 - 10536.172: 8.0729% ( 105) 00:08:13.611 10536.172 - 10586.585: 9.1647% ( 109) 00:08:13.611 10586.585 - 10636.997: 10.2965% ( 113) 00:08:13.611 10636.997 - 10687.409: 11.4483% ( 115) 00:08:13.611 10687.409 - 10737.822: 12.6302% ( 118) 00:08:13.611 10737.822 - 10788.234: 14.0425% ( 141) 00:08:13.611 10788.234 - 10838.646: 15.1042% ( 106) 00:08:13.611 10838.646 - 10889.058: 15.9455% ( 84) 00:08:13.611 10889.058 - 10939.471: 16.9371% ( 99) 00:08:13.611 10939.471 - 10989.883: 18.2292% ( 129) 00:08:13.611 10989.883 - 11040.295: 19.3209% ( 109) 00:08:13.611 11040.295 - 11090.708: 20.3926% ( 107) 00:08:13.611 11090.708 - 11141.120: 21.5345% ( 114) 00:08:13.611 11141.120 - 11191.532: 22.5461% ( 101) 00:08:13.611 11191.532 - 11241.945: 23.7480% ( 120) 
00:08:13.611 11241.945 - 11292.357: 24.9099% ( 116) 00:08:13.611 11292.357 - 11342.769: 26.6226% ( 171) 00:08:13.611 11342.769 - 11393.182: 28.1550% ( 153) 00:08:13.611 11393.182 - 11443.594: 29.5072% ( 135) 00:08:13.611 11443.594 - 11494.006: 30.8694% ( 136) 00:08:13.611 11494.006 - 11544.418: 32.2616% ( 139) 00:08:13.611 11544.418 - 11594.831: 33.5837% ( 132) 00:08:13.611 11594.831 - 11645.243: 34.7756% ( 119) 00:08:13.611 11645.243 - 11695.655: 36.0276% ( 125) 00:08:13.611 11695.655 - 11746.068: 37.5300% ( 150) 00:08:13.611 11746.068 - 11796.480: 39.0325% ( 150) 00:08:13.611 11796.480 - 11846.892: 40.7652% ( 173) 00:08:13.611 11846.892 - 11897.305: 42.7284% ( 196) 00:08:13.611 11897.305 - 11947.717: 44.3610% ( 163) 00:08:13.611 11947.717 - 11998.129: 46.0637% ( 170) 00:08:13.611 11998.129 - 12048.542: 47.5561% ( 149) 00:08:13.611 12048.542 - 12098.954: 49.0585% ( 150) 00:08:13.611 12098.954 - 12149.366: 50.4407% ( 138) 00:08:13.611 12149.366 - 12199.778: 51.8129% ( 137) 00:08:13.611 12199.778 - 12250.191: 52.8145% ( 100) 00:08:13.611 12250.191 - 12300.603: 53.7760% ( 96) 00:08:13.611 12300.603 - 12351.015: 54.8177% ( 104) 00:08:13.611 12351.015 - 12401.428: 55.8594% ( 104) 00:08:13.611 12401.428 - 12451.840: 56.8610% ( 100) 00:08:13.611 12451.840 - 12502.252: 57.7224% ( 86) 00:08:13.611 12502.252 - 12552.665: 58.4836% ( 76) 00:08:13.611 12552.665 - 12603.077: 59.2548% ( 77) 00:08:13.611 12603.077 - 12653.489: 60.0260% ( 77) 00:08:13.611 12653.489 - 12703.902: 60.7873% ( 76) 00:08:13.611 12703.902 - 12754.314: 61.5485% ( 76) 00:08:13.611 12754.314 - 12804.726: 62.3297% ( 78) 00:08:13.611 12804.726 - 12855.138: 63.2612% ( 93) 00:08:13.611 12855.138 - 12905.551: 64.0425% ( 78) 00:08:13.611 12905.551 - 13006.375: 65.9555% ( 191) 00:08:13.611 13006.375 - 13107.200: 67.4279% ( 147) 00:08:13.611 13107.200 - 13208.025: 69.3910% ( 196) 00:08:13.611 13208.025 - 13308.849: 70.3826% ( 99) 00:08:13.611 13308.849 - 13409.674: 71.0637% ( 68) 00:08:13.611 13409.674 - 13510.498: 71.9651% ( 90) 00:08:13.611 13510.498 - 13611.323: 72.4058% ( 44) 00:08:13.611 13611.323 - 13712.148: 72.7764% ( 37) 00:08:13.611 13712.148 - 13812.972: 73.2873% ( 51) 00:08:13.611 13812.972 - 13913.797: 74.1887% ( 90) 00:08:13.611 13913.797 - 14014.622: 74.8297% ( 64) 00:08:13.611 14014.622 - 14115.446: 75.4607% ( 63) 00:08:13.611 14115.446 - 14216.271: 76.2520% ( 79) 00:08:13.611 14216.271 - 14317.095: 76.9431% ( 69) 00:08:13.611 14317.095 - 14417.920: 78.0248% ( 108) 00:08:13.611 14417.920 - 14518.745: 79.0264% ( 100) 00:08:13.611 14518.745 - 14619.569: 80.1082% ( 108) 00:08:13.611 14619.569 - 14720.394: 80.8994% ( 79) 00:08:13.611 14720.394 - 14821.218: 81.7808% ( 88) 00:08:13.611 14821.218 - 14922.043: 82.7224% ( 94) 00:08:13.611 14922.043 - 15022.868: 83.6939% ( 97) 00:08:13.611 15022.868 - 15123.692: 84.3349% ( 64) 00:08:13.611 15123.692 - 15224.517: 85.1763% ( 84) 00:08:13.611 15224.517 - 15325.342: 85.8373% ( 66) 00:08:13.611 15325.342 - 15426.166: 86.9992% ( 116) 00:08:13.611 15426.166 - 15526.991: 88.0409% ( 104) 00:08:13.611 15526.991 - 15627.815: 89.2328% ( 119) 00:08:13.611 15627.815 - 15728.640: 90.5048% ( 127) 00:08:13.611 15728.640 - 15829.465: 91.4463% ( 94) 00:08:13.611 15829.465 - 15930.289: 92.2075% ( 76) 00:08:13.611 15930.289 - 16031.114: 92.7284% ( 52) 00:08:13.611 16031.114 - 16131.938: 93.1691% ( 44) 00:08:13.611 16131.938 - 16232.763: 93.7400% ( 57) 00:08:13.612 16232.763 - 16333.588: 94.0705% ( 33) 00:08:13.612 16333.588 - 16434.412: 94.3910% ( 32) 00:08:13.612 16434.412 - 16535.237: 94.6815% ( 29) 
00:08:13.612 16535.237 - 16636.062: 94.9720% ( 29) 00:08:13.612 16636.062 - 16736.886: 95.3125% ( 34) 00:08:13.612 16736.886 - 16837.711: 95.6731% ( 36) 00:08:13.612 16837.711 - 16938.535: 96.0437% ( 37) 00:08:13.612 16938.535 - 17039.360: 96.4243% ( 38) 00:08:13.612 17039.360 - 17140.185: 96.9351% ( 51) 00:08:13.612 17140.185 - 17241.009: 97.3658% ( 43) 00:08:13.612 17241.009 - 17341.834: 97.7364% ( 37) 00:08:13.612 17341.834 - 17442.658: 98.0268% ( 29) 00:08:13.612 17442.658 - 17543.483: 98.1871% ( 16) 00:08:13.612 17543.483 - 17644.308: 98.3774% ( 19) 00:08:13.612 17644.308 - 17745.132: 98.5777% ( 20) 00:08:13.612 17745.132 - 17845.957: 98.7680% ( 19) 00:08:13.612 17845.957 - 17946.782: 98.9483% ( 18) 00:08:13.612 17946.782 - 18047.606: 99.0785% ( 13) 00:08:13.612 18047.606 - 18148.431: 99.1386% ( 6) 00:08:13.612 18148.431 - 18249.255: 99.1887% ( 5) 00:08:13.612 18249.255 - 18350.080: 99.2388% ( 5) 00:08:13.612 18350.080 - 18450.905: 99.2989% ( 6) 00:08:13.612 18450.905 - 18551.729: 99.3490% ( 5) 00:08:13.612 18551.729 - 18652.554: 99.3590% ( 1) 00:08:13.612 22584.714 - 22685.538: 99.3990% ( 4) 00:08:13.612 22685.538 - 22786.363: 99.4391% ( 4) 00:08:13.612 22786.363 - 22887.188: 99.4792% ( 4) 00:08:13.612 22887.188 - 22988.012: 99.5192% ( 4) 00:08:13.612 22988.012 - 23088.837: 99.5593% ( 4) 00:08:13.612 23088.837 - 23189.662: 99.5994% ( 4) 00:08:13.612 23189.662 - 23290.486: 99.6494% ( 5) 00:08:13.612 23290.486 - 23391.311: 99.6895% ( 4) 00:08:13.612 23391.311 - 23492.135: 99.7296% ( 4) 00:08:13.612 23492.135 - 23592.960: 99.7696% ( 4) 00:08:13.612 23592.960 - 23693.785: 99.8097% ( 4) 00:08:13.612 23693.785 - 23794.609: 99.8498% ( 4) 00:08:13.612 23794.609 - 23895.434: 99.8898% ( 4) 00:08:13.612 23895.434 - 23996.258: 99.9199% ( 3) 00:08:13.612 23996.258 - 24097.083: 99.9599% ( 4) 00:08:13.612 24097.083 - 24197.908: 100.0000% ( 4) 00:08:13.612 00:08:13.612 11:24:12 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:08:13.612 00:08:13.612 real 0m2.497s 00:08:13.612 user 0m2.191s 00:08:13.612 sys 0m0.198s 00:08:13.612 11:24:12 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.612 11:24:12 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:08:13.612 ************************************ 00:08:13.612 END TEST nvme_perf 00:08:13.612 ************************************ 00:08:13.612 11:24:12 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:13.612 11:24:12 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:13.612 11:24:12 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.612 11:24:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.612 ************************************ 00:08:13.612 START TEST nvme_hello_world 00:08:13.612 ************************************ 00:08:13.612 11:24:12 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:08:13.612 Initializing NVMe Controllers 00:08:13.612 Attached to 0000:00:13.0 00:08:13.612 Namespace ID: 1 size: 1GB 00:08:13.612 Attached to 0000:00:10.0 00:08:13.612 Namespace ID: 1 size: 6GB 00:08:13.612 Attached to 0000:00:11.0 00:08:13.612 Namespace ID: 1 size: 5GB 00:08:13.612 Attached to 0000:00:12.0 00:08:13.612 Namespace ID: 1 size: 4GB 00:08:13.612 Namespace ID: 2 size: 4GB 00:08:13.612 Namespace ID: 3 size: 4GB 00:08:13.612 Initialization complete. 
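The namespace listing above can be summed per controller; a minimal sketch, assuming the hello_world console output above is saved to a file (hello.log is a hypothetical name):

# Sum the reported namespace sizes per controller from the listing above.
awk '/Attached to 0000:/    { ctrl = $NF }
     /Namespace ID:.*size:/ { sub(/GB$/, "", $NF); ns[ctrl]++; gb[ctrl] += $NF }
     END { for (c in ns) printf "%s: %d namespace(s), %d GB total\n", c, ns[c], gb[c] }' hello.log

With the values shown above this works out to 1 GB, 6 GB, 5 GB, and 3 x 4 GB = 12 GB for 0000:00:13.0, 10.0, 11.0 and 12.0 respectively.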
00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 INFO: using host memory buffer for IO 00:08:13.612 Hello world! 00:08:13.612 00:08:13.612 real 0m0.220s 00:08:13.612 user 0m0.082s 00:08:13.612 sys 0m0.095s 00:08:13.612 11:24:12 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:13.612 ************************************ 00:08:13.612 END TEST nvme_hello_world 00:08:13.612 ************************************ 00:08:13.612 11:24:12 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:13.869 11:24:12 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:13.869 11:24:12 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:13.869 11:24:12 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.869 11:24:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.870 ************************************ 00:08:13.870 START TEST nvme_sgl 00:08:13.870 ************************************ 00:08:13.870 11:24:12 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:08:13.870 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_3 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:08:13.870 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:08:13.870 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:08:13.870 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:08:13.870 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:08:13.870 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:08:14.127 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:08:14.127 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:08:14.127 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_3 Invalid IO length 
parameter 00:08:14.127 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:08:14.127 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:08:14.127 NVMe Readv/Writev Request test 00:08:14.127 Attached to 0000:00:13.0 00:08:14.127 Attached to 0000:00:10.0 00:08:14.127 Attached to 0000:00:11.0 00:08:14.127 Attached to 0000:00:12.0 00:08:14.127 0000:00:10.0: build_io_request_2 test passed 00:08:14.127 0000:00:10.0: build_io_request_4 test passed 00:08:14.127 0000:00:10.0: build_io_request_5 test passed 00:08:14.127 0000:00:10.0: build_io_request_6 test passed 00:08:14.127 0000:00:10.0: build_io_request_7 test passed 00:08:14.127 0000:00:10.0: build_io_request_10 test passed 00:08:14.127 0000:00:11.0: build_io_request_2 test passed 00:08:14.127 0000:00:11.0: build_io_request_4 test passed 00:08:14.127 0000:00:11.0: build_io_request_5 test passed 00:08:14.127 0000:00:11.0: build_io_request_6 test passed 00:08:14.127 0000:00:11.0: build_io_request_7 test passed 00:08:14.127 0000:00:11.0: build_io_request_10 test passed 00:08:14.127 Cleaning up... 00:08:14.127 00:08:14.127 real 0m0.292s 00:08:14.127 user 0m0.151s 00:08:14.128 sys 0m0.098s 00:08:14.128 11:24:13 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.128 11:24:13 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 END TEST nvme_sgl 00:08:14.128 ************************************ 00:08:14.128 11:24:13 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:14.128 11:24:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.128 11:24:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.128 11:24:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.128 ************************************ 00:08:14.128 START TEST nvme_e2edp 00:08:14.128 ************************************ 00:08:14.128 11:24:13 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:08:14.385 NVMe Write/Read with End-to-End data protection test 00:08:14.385 Attached to 0000:00:13.0 00:08:14.385 Attached to 0000:00:10.0 00:08:14.385 Attached to 0000:00:11.0 00:08:14.385 Attached to 0000:00:12.0 00:08:14.385 Cleaning up... 
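The build_io_request lines earlier in this output mix per-controller passes with "Invalid IO length parameter" rejections; a minimal tally sketch, assuming the nvme_sgl output is saved to a file (sgl.log is a hypothetical name):

# Count passed vs. rejected SGL build_io_request results per controller.
echo "== test passed =="
grep 'build_io_request_.* test passed' sgl.log |
    grep -oE '0000:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | sort | uniq -c
echo "== Invalid IO length parameter =="
grep 'build_io_request_.*Invalid IO length parameter' sgl.log |
    grep -oE '0000:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9]' | sort | uniq -c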
00:08:14.385 00:08:14.385 real 0m0.210s 00:08:14.385 user 0m0.070s 00:08:14.385 sys 0m0.097s 00:08:14.385 ************************************ 00:08:14.385 END TEST nvme_e2edp 00:08:14.385 ************************************ 00:08:14.385 11:24:13 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.385 11:24:13 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:08:14.385 11:24:13 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:14.385 11:24:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.385 11:24:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.385 11:24:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.385 ************************************ 00:08:14.385 START TEST nvme_reserve 00:08:14.385 ************************************ 00:08:14.385 11:24:13 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:08:14.643 ===================================================== 00:08:14.643 NVMe Controller at PCI bus 0, device 19, function 0 00:08:14.643 ===================================================== 00:08:14.643 Reservations: Not Supported 00:08:14.643 ===================================================== 00:08:14.643 NVMe Controller at PCI bus 0, device 16, function 0 00:08:14.643 ===================================================== 00:08:14.643 Reservations: Not Supported 00:08:14.643 ===================================================== 00:08:14.643 NVMe Controller at PCI bus 0, device 17, function 0 00:08:14.643 ===================================================== 00:08:14.643 Reservations: Not Supported 00:08:14.643 ===================================================== 00:08:14.643 NVMe Controller at PCI bus 0, device 18, function 0 00:08:14.643 ===================================================== 00:08:14.643 Reservations: Not Supported 00:08:14.643 Reservation test passed 00:08:14.643 00:08:14.643 real 0m0.211s 00:08:14.643 user 0m0.081s 00:08:14.643 sys 0m0.089s 00:08:14.643 ************************************ 00:08:14.643 END TEST nvme_reserve 00:08:14.643 ************************************ 00:08:14.643 11:24:13 nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.643 11:24:13 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:08:14.643 11:24:13 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:14.643 11:24:13 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:14.643 11:24:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.643 11:24:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.643 ************************************ 00:08:14.643 START TEST nvme_err_injection 00:08:14.643 ************************************ 00:08:14.643 11:24:13 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:08:14.902 NVMe Error Injection test 00:08:14.902 Attached to 0000:00:13.0 00:08:14.902 Attached to 0000:00:10.0 00:08:14.902 Attached to 0000:00:11.0 00:08:14.902 Attached to 0000:00:12.0 00:08:14.902 0000:00:13.0: get features failed as expected 00:08:14.902 0000:00:10.0: get features failed as expected 00:08:14.902 0000:00:11.0: get features failed as expected 00:08:14.902 0000:00:12.0: get features failed as expected 00:08:14.902 
0000:00:13.0: get features successfully as expected 00:08:14.902 0000:00:10.0: get features successfully as expected 00:08:14.902 0000:00:11.0: get features successfully as expected 00:08:14.902 0000:00:12.0: get features successfully as expected 00:08:14.902 0000:00:13.0: read failed as expected 00:08:14.902 0000:00:11.0: read failed as expected 00:08:14.902 0000:00:12.0: read failed as expected 00:08:14.902 0000:00:10.0: read failed as expected 00:08:14.902 0000:00:13.0: read successfully as expected 00:08:14.902 0000:00:10.0: read successfully as expected 00:08:14.902 0000:00:11.0: read successfully as expected 00:08:14.902 0000:00:12.0: read successfully as expected 00:08:14.902 Cleaning up... 00:08:14.902 ************************************ 00:08:14.902 END TEST nvme_err_injection 00:08:14.902 ************************************ 00:08:14.902 00:08:14.902 real 0m0.221s 00:08:14.902 user 0m0.090s 00:08:14.902 sys 0m0.090s 00:08:14.902 11:24:13 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:14.902 11:24:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:08:14.902 11:24:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:14.902 11:24:13 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:08:14.902 11:24:13 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:14.902 11:24:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.902 ************************************ 00:08:14.902 START TEST nvme_overhead 00:08:14.902 ************************************ 00:08:14.902 11:24:13 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:08:16.276 Initializing NVMe Controllers 00:08:16.276 Attached to 0000:00:13.0 00:08:16.276 Attached to 0000:00:10.0 00:08:16.276 Attached to 0000:00:11.0 00:08:16.276 Attached to 0000:00:12.0 00:08:16.276 Initialization complete. Launching workers. 
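The submit and complete histograms that follow list a cumulative percentage and a per-bucket count for each latency bucket (in us), so the median can be read off as the first bucket whose cumulative percentage reaches 50% (for the submit side below, the 11.028 - 11.077 us bucket). A minimal sketch against a saved copy of this nvme_overhead section (overhead.log is a hypothetical name):

# Print the first bucket at or above the 50% cumulative mark (submit side,
# since the submit histogram appears first in the saved section).
awk '$3 == "-" && $5 ~ /%$/ {
    pct = $5; sub(/%$/, "", pct)
    if (pct + 0 >= 50) {
        hi = $4; sub(/:$/, "", hi)
        printf "median bucket: %s - %s us (cumulative %s)\n", $2, hi, $5
        exit
    }
}' overhead.log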
00:08:16.276 submit (in ns) avg, min, max = 11347.7, 9903.1, 81770.8 00:08:16.276 complete (in ns) avg, min, max = 7547.4, 7148.5, 318882.3 00:08:16.276 00:08:16.276 Submit histogram 00:08:16.276 ================ 00:08:16.276 Range in us Cumulative Count 00:08:16.276 9.895 - 9.945: 0.0061% ( 1) 00:08:16.276 9.945 - 9.994: 0.0123% ( 1) 00:08:16.276 10.043 - 10.092: 0.0184% ( 1) 00:08:16.276 10.142 - 10.191: 0.0246% ( 1) 00:08:16.276 10.191 - 10.240: 0.0369% ( 2) 00:08:16.276 10.338 - 10.388: 0.0430% ( 1) 00:08:16.276 10.437 - 10.486: 0.0553% ( 2) 00:08:16.276 10.585 - 10.634: 0.0615% ( 1) 00:08:16.276 10.683 - 10.732: 0.0738% ( 2) 00:08:16.276 10.732 - 10.782: 0.2336% ( 26) 00:08:16.276 10.782 - 10.831: 1.2355% ( 163) 00:08:16.276 10.831 - 10.880: 4.6407% ( 554) 00:08:16.276 10.880 - 10.929: 12.9572% ( 1353) 00:08:16.276 10.929 - 10.978: 25.8160% ( 2092) 00:08:16.276 10.978 - 11.028: 40.8200% ( 2441) 00:08:16.276 11.028 - 11.077: 54.6438% ( 2249) 00:08:16.276 11.077 - 11.126: 64.6014% ( 1620) 00:08:16.276 11.126 - 11.175: 71.2767% ( 1086) 00:08:16.276 11.175 - 11.225: 75.1122% ( 624) 00:08:16.276 11.225 - 11.274: 77.4909% ( 387) 00:08:16.276 11.274 - 11.323: 79.1259% ( 266) 00:08:16.276 11.323 - 11.372: 80.2938% ( 190) 00:08:16.276 11.372 - 11.422: 81.2404% ( 154) 00:08:16.276 11.422 - 11.471: 82.1440% ( 147) 00:08:16.276 11.471 - 11.520: 82.9369% ( 129) 00:08:16.276 11.520 - 11.569: 83.6069% ( 109) 00:08:16.276 11.569 - 11.618: 84.3568% ( 122) 00:08:16.276 11.618 - 11.668: 85.1988% ( 137) 00:08:16.276 11.668 - 11.717: 85.9426% ( 121) 00:08:16.276 11.717 - 11.766: 86.7478% ( 131) 00:08:16.276 11.766 - 11.815: 87.6022% ( 139) 00:08:16.276 11.815 - 11.865: 88.4074% ( 131) 00:08:16.276 11.865 - 11.914: 89.2065% ( 130) 00:08:16.276 11.914 - 11.963: 89.9748% ( 125) 00:08:16.276 11.963 - 12.012: 91.0935% ( 182) 00:08:16.276 12.012 - 12.062: 92.0462% ( 155) 00:08:16.276 12.062 - 12.111: 93.0789% ( 168) 00:08:16.276 12.111 - 12.160: 93.9763% ( 146) 00:08:16.276 12.160 - 12.209: 94.8245% ( 138) 00:08:16.276 12.209 - 12.258: 95.4699% ( 105) 00:08:16.276 12.258 - 12.308: 95.8940% ( 69) 00:08:16.276 12.308 - 12.357: 96.1829% ( 47) 00:08:16.276 12.357 - 12.406: 96.3735% ( 31) 00:08:16.276 12.406 - 12.455: 96.4841% ( 18) 00:08:16.276 12.455 - 12.505: 96.5640% ( 13) 00:08:16.276 12.505 - 12.554: 96.6439% ( 13) 00:08:16.276 12.554 - 12.603: 96.7054% ( 10) 00:08:16.276 12.603 - 12.702: 96.8099% ( 17) 00:08:16.276 12.702 - 12.800: 96.8775% ( 11) 00:08:16.276 12.800 - 12.898: 96.9758% ( 16) 00:08:16.276 12.898 - 12.997: 97.0558% ( 13) 00:08:16.276 12.997 - 13.095: 97.2156% ( 26) 00:08:16.276 13.095 - 13.194: 97.3385% ( 20) 00:08:16.276 13.194 - 13.292: 97.4491% ( 18) 00:08:16.276 13.292 - 13.391: 97.5598% ( 18) 00:08:16.276 13.391 - 13.489: 97.6335% ( 12) 00:08:16.276 13.489 - 13.588: 97.6581% ( 4) 00:08:16.276 13.588 - 13.686: 97.7565% ( 16) 00:08:16.276 13.686 - 13.785: 97.8179% ( 10) 00:08:16.276 13.785 - 13.883: 97.8733% ( 9) 00:08:16.276 13.883 - 13.982: 97.9224% ( 8) 00:08:16.276 13.982 - 14.080: 97.9716% ( 8) 00:08:16.276 14.080 - 14.178: 97.9962% ( 4) 00:08:16.276 14.178 - 14.277: 98.0392% ( 7) 00:08:16.276 14.277 - 14.375: 98.0699% ( 5) 00:08:16.276 14.375 - 14.474: 98.0761% ( 1) 00:08:16.276 14.474 - 14.572: 98.1130% ( 6) 00:08:16.276 14.572 - 14.671: 98.1437% ( 5) 00:08:16.276 14.671 - 14.769: 98.1867% ( 7) 00:08:16.276 14.769 - 14.868: 98.2543% ( 11) 00:08:16.277 14.868 - 14.966: 98.2851% ( 5) 00:08:16.277 14.966 - 15.065: 98.3158% ( 5) 00:08:16.277 15.065 - 15.163: 98.3650% ( 8) 00:08:16.277 
15.163 - 15.262: 98.4019% ( 6) 00:08:16.277 15.262 - 15.360: 98.4449% ( 7) 00:08:16.277 15.360 - 15.458: 98.4879% ( 7) 00:08:16.277 15.458 - 15.557: 98.5248% ( 6) 00:08:16.277 15.557 - 15.655: 98.5494% ( 4) 00:08:16.277 15.655 - 15.754: 98.5678% ( 3) 00:08:16.277 15.754 - 15.852: 98.6109% ( 7) 00:08:16.277 15.852 - 15.951: 98.6416% ( 5) 00:08:16.277 15.951 - 16.049: 98.6600% ( 3) 00:08:16.277 16.049 - 16.148: 98.6723% ( 2) 00:08:16.277 16.148 - 16.246: 98.6785% ( 1) 00:08:16.277 16.345 - 16.443: 98.7031% ( 4) 00:08:16.277 16.443 - 16.542: 98.7276% ( 4) 00:08:16.277 16.542 - 16.640: 98.7953% ( 11) 00:08:16.277 16.640 - 16.738: 98.8690% ( 12) 00:08:16.277 16.738 - 16.837: 98.8936% ( 4) 00:08:16.277 16.837 - 16.935: 98.9305% ( 6) 00:08:16.277 16.935 - 17.034: 98.9858% ( 9) 00:08:16.277 17.034 - 17.132: 99.0411% ( 9) 00:08:16.277 17.132 - 17.231: 99.1026% ( 10) 00:08:16.277 17.231 - 17.329: 99.1518% ( 8) 00:08:16.277 17.329 - 17.428: 99.1948% ( 7) 00:08:16.277 17.428 - 17.526: 99.2747% ( 13) 00:08:16.277 17.526 - 17.625: 99.3423% ( 11) 00:08:16.277 17.625 - 17.723: 99.3976% ( 9) 00:08:16.277 17.723 - 17.822: 99.4591% ( 10) 00:08:16.277 17.822 - 17.920: 99.4898% ( 5) 00:08:16.277 17.920 - 18.018: 99.5329% ( 7) 00:08:16.277 18.018 - 18.117: 99.5697% ( 6) 00:08:16.277 18.117 - 18.215: 99.5882% ( 3) 00:08:16.277 18.215 - 18.314: 99.6251% ( 6) 00:08:16.277 18.314 - 18.412: 99.6373% ( 2) 00:08:16.277 18.412 - 18.511: 99.6496% ( 2) 00:08:16.277 18.609 - 18.708: 99.6681% ( 3) 00:08:16.277 18.708 - 18.806: 99.6742% ( 1) 00:08:16.277 18.806 - 18.905: 99.6865% ( 2) 00:08:16.277 19.003 - 19.102: 99.6927% ( 1) 00:08:16.277 19.102 - 19.200: 99.7111% ( 3) 00:08:16.277 19.298 - 19.397: 99.7173% ( 1) 00:08:16.277 19.397 - 19.495: 99.7295% ( 2) 00:08:16.277 19.495 - 19.594: 99.7480% ( 3) 00:08:16.277 20.086 - 20.185: 99.7541% ( 1) 00:08:16.277 20.185 - 20.283: 99.7664% ( 2) 00:08:16.277 20.283 - 20.382: 99.7726% ( 1) 00:08:16.277 20.382 - 20.480: 99.7787% ( 1) 00:08:16.277 20.775 - 20.874: 99.7849% ( 1) 00:08:16.277 20.874 - 20.972: 99.8033% ( 3) 00:08:16.277 21.169 - 21.268: 99.8095% ( 1) 00:08:16.277 21.563 - 21.662: 99.8217% ( 2) 00:08:16.277 21.662 - 21.760: 99.8279% ( 1) 00:08:16.277 21.760 - 21.858: 99.8340% ( 1) 00:08:16.277 22.055 - 22.154: 99.8463% ( 2) 00:08:16.277 22.154 - 22.252: 99.8525% ( 1) 00:08:16.277 22.252 - 22.351: 99.8586% ( 1) 00:08:16.277 22.449 - 22.548: 99.8648% ( 1) 00:08:16.277 22.646 - 22.745: 99.8709% ( 1) 00:08:16.277 22.942 - 23.040: 99.8771% ( 1) 00:08:16.277 23.138 - 23.237: 99.8832% ( 1) 00:08:16.277 23.828 - 23.926: 99.8894% ( 1) 00:08:16.277 24.123 - 24.222: 99.8955% ( 1) 00:08:16.277 25.009 - 25.108: 99.9017% ( 1) 00:08:16.277 26.585 - 26.782: 99.9078% ( 1) 00:08:16.277 27.569 - 27.766: 99.9201% ( 2) 00:08:16.277 30.720 - 30.917: 99.9262% ( 1) 00:08:16.277 32.295 - 32.492: 99.9324% ( 1) 00:08:16.277 32.689 - 32.886: 99.9385% ( 1) 00:08:16.277 35.052 - 35.249: 99.9447% ( 1) 00:08:16.277 40.172 - 40.369: 99.9508% ( 1) 00:08:16.277 43.520 - 43.717: 99.9570% ( 1) 00:08:16.277 44.702 - 44.898: 99.9631% ( 1) 00:08:16.277 47.655 - 47.852: 99.9693% ( 1) 00:08:16.277 59.865 - 60.258: 99.9754% ( 1) 00:08:16.277 62.622 - 63.015: 99.9816% ( 1) 00:08:16.277 66.954 - 67.348: 99.9877% ( 1) 00:08:16.277 72.074 - 72.468: 99.9939% ( 1) 00:08:16.277 81.526 - 81.920: 100.0000% ( 1) 00:08:16.277 00:08:16.277 Complete histogram 00:08:16.277 ================== 00:08:16.277 Range in us Cumulative Count 00:08:16.277 7.138 - 7.188: 0.2643% ( 43) 00:08:16.277 7.188 - 7.237: 3.0487% ( 453) 00:08:16.277 
7.237 - 7.286: 12.8773% ( 1599) 00:08:16.277 7.286 - 7.335: 33.4194% ( 3342) 00:08:16.277 7.335 - 7.385: 56.9242% ( 3824) 00:08:16.277 7.385 - 7.434: 73.8644% ( 2756) 00:08:16.277 7.434 - 7.483: 83.7114% ( 1602) 00:08:16.277 7.483 - 7.532: 89.2618% ( 903) 00:08:16.277 7.532 - 7.582: 92.5380% ( 533) 00:08:16.277 7.582 - 7.631: 94.1976% ( 270) 00:08:16.277 7.631 - 7.680: 95.1564% ( 156) 00:08:16.277 7.680 - 7.729: 95.5314% ( 61) 00:08:16.277 7.729 - 7.778: 95.7219% ( 31) 00:08:16.277 7.778 - 7.828: 95.8387% ( 19) 00:08:16.277 7.828 - 7.877: 95.9494% ( 18) 00:08:16.277 7.877 - 7.926: 96.0293% ( 13) 00:08:16.277 7.926 - 7.975: 96.1092% ( 13) 00:08:16.277 7.975 - 8.025: 96.1338% ( 4) 00:08:16.277 8.025 - 8.074: 96.2014% ( 11) 00:08:16.277 8.074 - 8.123: 96.2997% ( 16) 00:08:16.277 8.123 - 8.172: 96.3858% ( 14) 00:08:16.277 8.172 - 8.222: 96.5026% ( 19) 00:08:16.277 8.222 - 8.271: 96.7177% ( 35) 00:08:16.277 8.271 - 8.320: 96.9636% ( 40) 00:08:16.277 8.320 - 8.369: 97.2217% ( 42) 00:08:16.277 8.369 - 8.418: 97.4430% ( 36) 00:08:16.277 8.418 - 8.468: 97.5782% ( 22) 00:08:16.277 8.468 - 8.517: 97.6827% ( 17) 00:08:16.277 8.517 - 8.566: 97.7257% ( 7) 00:08:16.277 8.566 - 8.615: 97.7688% ( 7) 00:08:16.277 8.615 - 8.665: 97.7995% ( 5) 00:08:16.277 8.665 - 8.714: 97.8118% ( 2) 00:08:16.277 8.714 - 8.763: 97.8425% ( 5) 00:08:16.277 8.763 - 8.812: 97.8548% ( 2) 00:08:16.277 8.812 - 8.862: 97.8671% ( 2) 00:08:16.277 8.911 - 8.960: 97.8733% ( 1) 00:08:16.277 8.960 - 9.009: 97.8855% ( 2) 00:08:16.277 9.108 - 9.157: 97.8917% ( 1) 00:08:16.277 9.157 - 9.206: 97.8978% ( 1) 00:08:16.277 9.206 - 9.255: 97.9040% ( 1) 00:08:16.277 9.255 - 9.305: 97.9101% ( 1) 00:08:16.277 9.305 - 9.354: 97.9224% ( 2) 00:08:16.277 9.354 - 9.403: 97.9286% ( 1) 00:08:16.277 9.403 - 9.452: 97.9347% ( 1) 00:08:16.277 9.502 - 9.551: 97.9409% ( 1) 00:08:16.277 9.551 - 9.600: 97.9532% ( 2) 00:08:16.277 9.600 - 9.649: 97.9777% ( 4) 00:08:16.277 9.649 - 9.698: 97.9900% ( 2) 00:08:16.277 9.698 - 9.748: 98.0023% ( 2) 00:08:16.277 9.748 - 9.797: 98.0208% ( 3) 00:08:16.277 9.797 - 9.846: 98.0392% ( 3) 00:08:16.277 9.846 - 9.895: 98.0454% ( 1) 00:08:16.277 9.895 - 9.945: 98.0822% ( 6) 00:08:16.277 9.945 - 9.994: 98.1130% ( 5) 00:08:16.277 9.994 - 10.043: 98.1376% ( 4) 00:08:16.277 10.043 - 10.092: 98.1499% ( 2) 00:08:16.277 10.092 - 10.142: 98.1621% ( 2) 00:08:16.277 10.191 - 10.240: 98.1683% ( 1) 00:08:16.277 10.240 - 10.289: 98.1990% ( 5) 00:08:16.277 10.289 - 10.338: 98.2113% ( 2) 00:08:16.277 10.338 - 10.388: 98.2236% ( 2) 00:08:16.277 10.388 - 10.437: 98.2482% ( 4) 00:08:16.277 10.437 - 10.486: 98.2728% ( 4) 00:08:16.277 10.486 - 10.535: 98.2789% ( 1) 00:08:16.277 10.535 - 10.585: 98.2851% ( 1) 00:08:16.277 10.585 - 10.634: 98.2912% ( 1) 00:08:16.277 10.634 - 10.683: 98.2974% ( 1) 00:08:16.277 10.683 - 10.732: 98.3158% ( 3) 00:08:16.277 10.732 - 10.782: 98.3220% ( 1) 00:08:16.277 10.782 - 10.831: 98.3281% ( 1) 00:08:16.277 10.831 - 10.880: 98.3527% ( 4) 00:08:16.277 10.880 - 10.929: 98.3588% ( 1) 00:08:16.277 10.929 - 10.978: 98.3650% ( 1) 00:08:16.277 11.028 - 11.077: 98.3773% ( 2) 00:08:16.277 11.077 - 11.126: 98.3896% ( 2) 00:08:16.277 11.126 - 11.175: 98.3957% ( 1) 00:08:16.277 11.175 - 11.225: 98.4019% ( 1) 00:08:16.277 11.225 - 11.274: 98.4142% ( 2) 00:08:16.277 11.323 - 11.372: 98.4203% ( 1) 00:08:16.277 11.471 - 11.520: 98.4326% ( 2) 00:08:16.277 11.520 - 11.569: 98.4449% ( 2) 00:08:16.277 11.766 - 11.815: 98.4510% ( 1) 00:08:16.277 11.963 - 12.012: 98.4572% ( 1) 00:08:16.277 12.062 - 12.111: 98.4633% ( 1) 00:08:16.277 12.258 - 
12.308: 98.4695% ( 1) 00:08:16.277 12.357 - 12.406: 98.4756% ( 1) 00:08:16.277 12.455 - 12.505: 98.4818% ( 1) 00:08:16.277 12.702 - 12.800: 98.4879% ( 1) 00:08:16.277 12.800 - 12.898: 98.5555% ( 11) 00:08:16.277 12.898 - 12.997: 98.6539% ( 16) 00:08:16.277 12.997 - 13.095: 98.7092% ( 9) 00:08:16.277 13.095 - 13.194: 98.7645% ( 9) 00:08:16.277 13.194 - 13.292: 98.8260% ( 10) 00:08:16.277 13.292 - 13.391: 98.9059% ( 13) 00:08:16.277 13.391 - 13.489: 98.9674% ( 10) 00:08:16.277 13.489 - 13.588: 99.1087% ( 23) 00:08:16.277 13.588 - 13.686: 99.1763% ( 11) 00:08:16.277 13.686 - 13.785: 99.2501% ( 12) 00:08:16.277 13.785 - 13.883: 99.3239% ( 12) 00:08:16.277 13.883 - 13.982: 99.3853% ( 10) 00:08:16.277 13.982 - 14.080: 99.4284% ( 7) 00:08:16.277 14.080 - 14.178: 99.5206% ( 15) 00:08:16.277 14.178 - 14.277: 99.5574% ( 6) 00:08:16.277 14.277 - 14.375: 99.5820% ( 4) 00:08:16.277 14.375 - 14.474: 99.5943% ( 2) 00:08:16.277 14.474 - 14.572: 99.6312% ( 6) 00:08:16.278 14.572 - 14.671: 99.6558% ( 4) 00:08:16.278 14.671 - 14.769: 99.6619% ( 1) 00:08:16.278 14.769 - 14.868: 99.6681% ( 1) 00:08:16.278 14.868 - 14.966: 99.6742% ( 1) 00:08:16.278 14.966 - 15.065: 99.6804% ( 1) 00:08:16.278 15.065 - 15.163: 99.6927% ( 2) 00:08:16.278 15.163 - 15.262: 99.7050% ( 2) 00:08:16.278 15.262 - 15.360: 99.7111% ( 1) 00:08:16.278 15.360 - 15.458: 99.7173% ( 1) 00:08:16.278 15.852 - 15.951: 99.7234% ( 1) 00:08:16.278 15.951 - 16.049: 99.7295% ( 1) 00:08:16.278 16.148 - 16.246: 99.7357% ( 1) 00:08:16.278 16.246 - 16.345: 99.7418% ( 1) 00:08:16.278 16.345 - 16.443: 99.7480% ( 1) 00:08:16.278 16.443 - 16.542: 99.7541% ( 1) 00:08:16.278 16.542 - 16.640: 99.7726% ( 3) 00:08:16.278 16.640 - 16.738: 99.7787% ( 1) 00:08:16.278 16.935 - 17.034: 99.7910% ( 2) 00:08:16.278 17.034 - 17.132: 99.7972% ( 1) 00:08:16.278 17.822 - 17.920: 99.8095% ( 2) 00:08:16.278 18.018 - 18.117: 99.8156% ( 1) 00:08:16.278 18.117 - 18.215: 99.8217% ( 1) 00:08:16.278 18.314 - 18.412: 99.8279% ( 1) 00:08:16.278 18.511 - 18.609: 99.8340% ( 1) 00:08:16.278 18.609 - 18.708: 99.8525% ( 3) 00:08:16.278 18.708 - 18.806: 99.8586% ( 1) 00:08:16.278 18.806 - 18.905: 99.8648% ( 1) 00:08:16.278 19.003 - 19.102: 99.8709% ( 1) 00:08:16.278 19.200 - 19.298: 99.8771% ( 1) 00:08:16.278 19.298 - 19.397: 99.8832% ( 1) 00:08:16.278 19.594 - 19.692: 99.8894% ( 1) 00:08:16.278 20.086 - 20.185: 99.8955% ( 1) 00:08:16.278 20.283 - 20.382: 99.9017% ( 1) 00:08:16.278 20.480 - 20.578: 99.9078% ( 1) 00:08:16.278 20.578 - 20.677: 99.9139% ( 1) 00:08:16.278 20.874 - 20.972: 99.9201% ( 1) 00:08:16.278 21.169 - 21.268: 99.9262% ( 1) 00:08:16.278 21.366 - 21.465: 99.9324% ( 1) 00:08:16.278 21.563 - 21.662: 99.9385% ( 1) 00:08:16.278 22.843 - 22.942: 99.9447% ( 1) 00:08:16.278 23.532 - 23.631: 99.9508% ( 1) 00:08:16.278 23.926 - 24.025: 99.9570% ( 1) 00:08:16.278 24.123 - 24.222: 99.9631% ( 1) 00:08:16.278 24.517 - 24.615: 99.9693% ( 1) 00:08:16.278 25.797 - 25.994: 99.9754% ( 1) 00:08:16.278 28.554 - 28.751: 99.9816% ( 1) 00:08:16.278 40.369 - 40.566: 99.9877% ( 1) 00:08:16.278 41.157 - 41.354: 99.9939% ( 1) 00:08:16.278 318.228 - 319.803: 100.0000% ( 1) 00:08:16.278 00:08:16.278 ************************************ 00:08:16.278 END TEST nvme_overhead 00:08:16.278 ************************************ 00:08:16.278 00:08:16.278 real 0m1.220s 00:08:16.278 user 0m1.078s 00:08:16.278 sys 0m0.092s 00:08:16.278 11:24:15 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:16.278 11:24:15 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:16.278 
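The "submit (in ns)" and "complete (in ns)" lines at the top of the overhead output give per-IO avg/min/max times in nanoseconds; the 11347.7 ns average submit time above is about 11.3 us. A minimal extraction sketch (overhead.log is again a hypothetical saved copy of this output):

# Pull the submit/complete summaries and convert the averages to microseconds.
grep 'in ns) avg, min, max' overhead.log |
awk -F'=' '{
    split($2, v, ",")                                # v[1]=avg v[2]=min v[3]=max, in ns
    metric = ($0 ~ /submit/) ? "submit" : "complete"
    printf "%-8s avg %.1f ns (%.2f us)\n", metric, v[1], v[1] / 1000
}'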
11:24:15 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:16.278 11:24:15 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:16.278 11:24:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:16.278 11:24:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:16.278 ************************************ 00:08:16.278 START TEST nvme_arbitration 00:08:16.278 ************************************ 00:08:16.278 11:24:15 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:19.557 Initializing NVMe Controllers 00:08:19.557 Attached to 0000:00:13.0 00:08:19.557 Attached to 0000:00:10.0 00:08:19.557 Attached to 0000:00:11.0 00:08:19.557 Attached to 0000:00:12.0 00:08:19.557 Associating QEMU NVMe Ctrl (12343 ) with lcore 0 00:08:19.557 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:08:19.557 Associating QEMU NVMe Ctrl (12341 ) with lcore 2 00:08:19.557 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:19.557 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:19.557 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:19.557 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:19.557 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:19.557 Initialization complete. Launching workers. 00:08:19.557 Starting thread on core 1 with urgent priority queue 00:08:19.557 Starting thread on core 2 with urgent priority queue 00:08:19.557 Starting thread on core 3 with urgent priority queue 00:08:19.557 Starting thread on core 0 with urgent priority queue 00:08:19.557 QEMU NVMe Ctrl (12343 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:19.557 QEMU NVMe Ctrl (12342 ) core 0: 874.67 IO/s 114.33 secs/100000 ios 00:08:19.557 QEMU NVMe Ctrl (12340 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:19.557 QEMU NVMe Ctrl (12342 ) core 1: 896.00 IO/s 111.61 secs/100000 ios 00:08:19.557 QEMU NVMe Ctrl (12341 ) core 2: 960.00 IO/s 104.17 secs/100000 ios 00:08:19.557 QEMU NVMe Ctrl (12342 ) core 3: 938.67 IO/s 106.53 secs/100000 ios 00:08:19.557 ======================================================== 00:08:19.557 00:08:19.557 ************************************ 00:08:19.557 END TEST nvme_arbitration 00:08:19.557 ************************************ 00:08:19.557 00:08:19.557 real 0m3.303s 00:08:19.557 user 0m9.205s 00:08:19.557 sys 0m0.117s 00:08:19.557 11:24:18 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.557 11:24:18 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:19.557 11:24:18 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:19.557 11:24:18 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:19.557 11:24:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.557 11:24:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.557 ************************************ 00:08:19.557 START TEST nvme_single_aen 00:08:19.557 ************************************ 00:08:19.557 11:24:18 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:19.557 Asynchronous Event Request test 00:08:19.557 Attached to 0000:00:13.0 00:08:19.557 Attached to 0000:00:10.0 00:08:19.557 Attached to 
0000:00:11.0 00:08:19.557 Attached to 0000:00:12.0 00:08:19.557 Reset controller to setup AER completions for this process 00:08:19.557 Registering asynchronous event callbacks... 00:08:19.557 Getting orig temperature thresholds of all controllers 00:08:19.557 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:19.557 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:19.557 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:19.557 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:19.557 Setting all controllers temperature threshold low to trigger AER 00:08:19.557 Waiting for all controllers temperature threshold to be set lower 00:08:19.557 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:19.557 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:19.557 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:19.557 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:19.557 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:19.557 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:19.557 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:19.557 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:19.557 Waiting for all controllers to trigger AER and reset threshold 00:08:19.557 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.557 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.557 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.557 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:19.557 Cleaning up... 
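The AER output above reports each temperature in both Kelvin and Celsius; the pairs are consistent with the integer conversion C = K - 273 (343 K -> 70 C for the original threshold, 323 K -> 50 C for the current reading). A one-line cross-check:

# Kelvin-to-Celsius conversion for the two values reported above.
for k in 343 323; do echo "$k Kelvin = $((k - 273)) Celsius"; done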
00:08:19.557 00:08:19.557 real 0m0.219s 00:08:19.557 user 0m0.076s 00:08:19.557 sys 0m0.099s 00:08:19.557 11:24:18 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:19.816 ************************************ 00:08:19.816 END TEST nvme_single_aen 00:08:19.816 ************************************ 00:08:19.816 11:24:18 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:19.816 11:24:18 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:19.816 11:24:18 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:19.816 11:24:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:19.816 11:24:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:19.816 ************************************ 00:08:19.816 START TEST nvme_doorbell_aers 00:08:19.816 ************************************ 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:19.816 11:24:18 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:20.074 [2024-11-05 11:24:19.127569] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:08:30.036 Executing: test_write_invalid_db 00:08:30.036 Waiting for AER completion... 00:08:30.036 Failure: test_write_invalid_db 00:08:30.036 00:08:30.036 Executing: test_invalid_db_write_overflow_sq 00:08:30.036 Waiting for AER completion... 00:08:30.036 Failure: test_invalid_db_write_overflow_sq 00:08:30.036 00:08:30.036 Executing: test_invalid_db_write_overflow_cq 00:08:30.036 Waiting for AER completion... 
00:08:30.036 Failure: test_invalid_db_write_overflow_cq 00:08:30.036 00:08:30.036 11:24:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:30.036 11:24:28 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:30.036 [2024-11-05 11:24:29.169767] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:08:39.999 Executing: test_write_invalid_db 00:08:39.999 Waiting for AER completion... 00:08:39.999 Failure: test_write_invalid_db 00:08:39.999 00:08:39.999 Executing: test_invalid_db_write_overflow_sq 00:08:39.999 Waiting for AER completion... 00:08:39.999 Failure: test_invalid_db_write_overflow_sq 00:08:39.999 00:08:39.999 Executing: test_invalid_db_write_overflow_cq 00:08:39.999 Waiting for AER completion... 00:08:39.999 Failure: test_invalid_db_write_overflow_cq 00:08:39.999 00:08:39.999 11:24:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:39.999 11:24:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:39.999 [2024-11-05 11:24:39.198931] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:08:49.977 Executing: test_write_invalid_db 00:08:49.977 Waiting for AER completion... 00:08:49.977 Failure: test_write_invalid_db 00:08:49.977 00:08:49.977 Executing: test_invalid_db_write_overflow_sq 00:08:49.977 Waiting for AER completion... 00:08:49.977 Failure: test_invalid_db_write_overflow_sq 00:08:49.977 00:08:49.977 Executing: test_invalid_db_write_overflow_cq 00:08:49.977 Waiting for AER completion... 00:08:49.977 Failure: test_invalid_db_write_overflow_cq 00:08:49.977 00:08:49.977 11:24:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:49.977 11:24:49 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:49.977 [2024-11-05 11:24:49.228770] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:08:59.945 Executing: test_write_invalid_db 00:08:59.945 Waiting for AER completion... 00:08:59.945 Failure: test_write_invalid_db 00:08:59.945 00:08:59.945 Executing: test_invalid_db_write_overflow_sq 00:08:59.945 Waiting for AER completion... 00:08:59.945 Failure: test_invalid_db_write_overflow_sq 00:08:59.945 00:08:59.945 Executing: test_invalid_db_write_overflow_cq 00:08:59.945 Waiting for AER completion... 
00:08:59.945 Failure: test_invalid_db_write_overflow_cq 00:08:59.945 00:08:59.945 00:08:59.945 real 0m40.191s 00:08:59.945 user 0m34.167s 00:08:59.945 sys 0m5.677s 00:08:59.945 11:24:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:59.945 11:24:59 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:59.945 ************************************ 00:08:59.945 END TEST nvme_doorbell_aers 00:08:59.945 ************************************ 00:08:59.945 11:24:59 nvme -- nvme/nvme.sh@97 -- # uname 00:08:59.945 11:24:59 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:59.945 11:24:59 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:59.945 11:24:59 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:59.945 11:24:59 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:59.945 11:24:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.945 ************************************ 00:08:59.945 START TEST nvme_multi_aen 00:08:59.945 ************************************ 00:08:59.945 11:24:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:09:00.202 [2024-11-05 11:24:59.275517] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.275679] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.275740] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.277099] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.277210] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.277270] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.278287] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.278373] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.278384] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.279323] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.279345] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 00:09:00.202 [2024-11-05 11:24:59.279352] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63136) is not found. Dropping the request. 
00:09:00.202 Child process pid: 63662 00:09:00.461 [Child] Asynchronous Event Request test 00:09:00.461 [Child] Attached to 0000:00:13.0 00:09:00.461 [Child] Attached to 0000:00:10.0 00:09:00.461 [Child] Attached to 0000:00:11.0 00:09:00.461 [Child] Attached to 0000:00:12.0 00:09:00.461 [Child] Registering asynchronous event callbacks... 00:09:00.461 [Child] Getting orig temperature thresholds of all controllers 00:09:00.461 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 [Child] Waiting for all controllers to trigger AER and reset threshold 00:09:00.461 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 [Child] Cleaning up... 00:09:00.461 Asynchronous Event Request test 00:09:00.461 Attached to 0000:00:13.0 00:09:00.461 Attached to 0000:00:10.0 00:09:00.461 Attached to 0000:00:11.0 00:09:00.461 Attached to 0000:00:12.0 00:09:00.461 Reset controller to setup AER completions for this process 00:09:00.461 Registering asynchronous event callbacks... 
00:09:00.461 Getting orig temperature thresholds of all controllers 00:09:00.461 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:00.461 Setting all controllers temperature threshold low to trigger AER 00:09:00.461 Waiting for all controllers temperature threshold to be set lower 00:09:00.461 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:00.461 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:00.461 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:00.461 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:00.461 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:00.461 Waiting for all controllers to trigger AER and reset threshold 00:09:00.461 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:00.461 Cleaning up... 00:09:00.462 00:09:00.462 real 0m0.432s 00:09:00.462 user 0m0.145s 00:09:00.462 sys 0m0.179s 00:09:00.462 11:24:59 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.462 11:24:59 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:09:00.462 ************************************ 00:09:00.462 END TEST nvme_multi_aen 00:09:00.462 ************************************ 00:09:00.462 11:24:59 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:00.462 11:24:59 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:00.462 11:24:59 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.462 11:24:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.462 ************************************ 00:09:00.462 START TEST nvme_startup 00:09:00.462 ************************************ 00:09:00.462 11:24:59 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:09:00.719 Initializing NVMe Controllers 00:09:00.719 Attached to 0000:00:13.0 00:09:00.719 Attached to 0000:00:10.0 00:09:00.719 Attached to 0000:00:11.0 00:09:00.719 Attached to 0000:00:12.0 00:09:00.719 Initialization complete. 00:09:00.719 Time used:151899.125 (us). 
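The startup test reports its initialization time in microseconds; converting the "Time used" figure above puts it at roughly 0.15 s to bring up all four controllers. A quick check:

# Convert the reported 'Time used' from microseconds to milliseconds and seconds.
awk 'BEGIN { us = 151899.125; printf "%.3f us = %.1f ms = %.3f s\n", us, us / 1000, us / 1e6 }'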
00:09:00.719 00:09:00.719 real 0m0.215s 00:09:00.719 user 0m0.073s 00:09:00.719 sys 0m0.097s 00:09:00.719 11:24:59 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.719 11:24:59 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:09:00.719 ************************************ 00:09:00.719 END TEST nvme_startup 00:09:00.719 ************************************ 00:09:00.719 11:24:59 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:09:00.719 11:24:59 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:00.719 11:24:59 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:00.719 11:24:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:00.719 ************************************ 00:09:00.720 START TEST nvme_multi_secondary 00:09:00.720 ************************************ 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=63718 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=63719 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:09:00.720 11:24:59 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:04.006 Initializing NVMe Controllers 00:09:04.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:04.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:04.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:04.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:04.006 Initialization complete. Launching workers. 
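The three spdk_nvme_perf invocations above all pass -i 0 (the same shared-memory group id), which is presumably what lets them attach to the same controllers as one primary and two secondary processes, while -c pins each instance to its own core mask and -t gives the first instance a longer runtime (5 s vs 3 s). A condensed sketch of that pattern; the exact sequencing inside nvme.sh may differ:

# Launch one longer-running perf instance and two shorter ones against the
# same controllers (shared-memory group 0), each pinned to its own core.
PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
"$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!   # core 0, 5 seconds
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!   # core 1, 3 seconds
"$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4             # core 2, 3 seconds
wait "$pid0" "$pid1"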
00:09:04.006 ======================================================== 00:09:04.006 Latency(us) 00:09:04.006 Device Information : IOPS MiB/s Average min max 00:09:04.006 PCIE (0000:00:13.0) NSID 1 from core 1: 7743.88 30.25 2065.72 730.96 9372.18 00:09:04.006 PCIE (0000:00:10.0) NSID 1 from core 1: 7743.88 30.25 2064.91 710.35 8564.41 00:09:04.006 PCIE (0000:00:11.0) NSID 1 from core 1: 7743.88 30.25 2065.84 733.63 8097.41 00:09:04.006 PCIE (0000:00:12.0) NSID 1 from core 1: 7743.88 30.25 2065.93 745.64 9300.31 00:09:04.006 PCIE (0000:00:12.0) NSID 2 from core 1: 7743.88 30.25 2065.97 732.45 9635.07 00:09:04.006 PCIE (0000:00:12.0) NSID 3 from core 1: 7743.88 30.25 2066.03 731.51 7918.94 00:09:04.006 ======================================================== 00:09:04.006 Total : 46463.29 181.50 2065.73 710.35 9635.07 00:09:04.006 00:09:04.006 Initializing NVMe Controllers 00:09:04.006 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:04.006 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:04.006 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:04.006 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:04.006 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:04.006 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:04.006 Initialization complete. Launching workers. 00:09:04.006 ======================================================== 00:09:04.006 Latency(us) 00:09:04.006 Device Information : IOPS MiB/s Average min max 00:09:04.006 PCIE (0000:00:13.0) NSID 1 from core 2: 3182.82 12.43 5026.61 1064.41 16278.01 00:09:04.006 PCIE (0000:00:10.0) NSID 1 from core 2: 3182.82 12.43 5025.09 1114.47 19124.92 00:09:04.006 PCIE (0000:00:11.0) NSID 1 from core 2: 3182.82 12.43 5026.86 1200.76 19465.75 00:09:04.006 PCIE (0000:00:12.0) NSID 1 from core 2: 3182.82 12.43 5026.40 1192.76 18114.09 00:09:04.006 PCIE (0000:00:12.0) NSID 2 from core 2: 3182.82 12.43 5027.04 1126.90 15345.84 00:09:04.006 PCIE (0000:00:12.0) NSID 3 from core 2: 3182.82 12.43 5025.49 1060.90 14900.88 00:09:04.006 ======================================================== 00:09:04.006 Total : 19096.95 74.60 5026.25 1060.90 19465.75 00:09:04.006 00:09:04.006 11:25:03 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 63718 00:09:06.542 Initializing NVMe Controllers 00:09:06.542 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.542 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:06.542 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:06.542 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.542 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:06.542 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:06.542 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:06.542 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:06.542 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:06.542 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:06.542 Initialization complete. Launching workers. 
00:09:06.542 ======================================================== 00:09:06.542 Latency(us) 00:09:06.542 Device Information : IOPS MiB/s Average min max 00:09:06.542 PCIE (0000:00:13.0) NSID 1 from core 0: 11108.23 43.39 1440.01 681.23 7461.64 00:09:06.542 PCIE (0000:00:10.0) NSID 1 from core 0: 11108.23 43.39 1439.17 667.61 7034.12 00:09:06.542 PCIE (0000:00:11.0) NSID 1 from core 0: 11108.23 43.39 1439.95 682.86 6871.86 00:09:06.542 PCIE (0000:00:12.0) NSID 1 from core 0: 11108.23 43.39 1439.93 684.54 7074.76 00:09:06.542 PCIE (0000:00:12.0) NSID 2 from core 0: 11108.23 43.39 1439.90 681.38 8996.75 00:09:06.542 PCIE (0000:00:12.0) NSID 3 from core 0: 11108.23 43.39 1439.88 610.64 7643.16 00:09:06.542 ======================================================== 00:09:06.542 Total : 66649.38 260.35 1439.81 610.64 8996.75 00:09:06.542 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 63719 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=63788 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=63789 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:06.542 11:25:05 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:09.849 Initializing NVMe Controllers 00:09:09.849 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:09.849 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:09.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:09.849 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:09.849 Initialization complete. Launching workers. 
00:09:09.849 ======================================================== 00:09:09.849 Latency(us) 00:09:09.849 Device Information : IOPS MiB/s Average min max 00:09:09.849 PCIE (0000:00:13.0) NSID 1 from core 1: 7295.14 28.50 2192.80 736.96 11441.53 00:09:09.849 PCIE (0000:00:10.0) NSID 1 from core 1: 7295.14 28.50 2191.93 727.59 11551.59 00:09:09.849 PCIE (0000:00:11.0) NSID 1 from core 1: 7295.14 28.50 2192.84 747.40 11735.12 00:09:09.849 PCIE (0000:00:12.0) NSID 1 from core 1: 7295.14 28.50 2192.81 749.20 12463.51 00:09:09.849 PCIE (0000:00:12.0) NSID 2 from core 1: 7295.14 28.50 2192.80 744.32 10602.30 00:09:09.849 PCIE (0000:00:12.0) NSID 3 from core 1: 7295.14 28.50 2192.77 752.22 11829.68 00:09:09.849 ======================================================== 00:09:09.849 Total : 43770.86 170.98 2192.66 727.59 12463.51 00:09:09.849 00:09:09.849 Initializing NVMe Controllers 00:09:09.849 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:09.849 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:09.849 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:09.849 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:09.849 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:09.849 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:09.849 Initialization complete. Launching workers. 00:09:09.849 ======================================================== 00:09:09.849 Latency(us) 00:09:09.849 Device Information : IOPS MiB/s Average min max 00:09:09.849 PCIE (0000:00:13.0) NSID 1 from core 0: 7544.94 29.47 2120.20 732.33 10030.73 00:09:09.849 PCIE (0000:00:10.0) NSID 1 from core 0: 7544.94 29.47 2119.20 701.19 9778.49 00:09:09.849 PCIE (0000:00:11.0) NSID 1 from core 0: 7544.94 29.47 2120.07 728.85 11454.20 00:09:09.849 PCIE (0000:00:12.0) NSID 1 from core 0: 7544.94 29.47 2120.00 606.29 12326.43 00:09:09.849 PCIE (0000:00:12.0) NSID 2 from core 0: 7544.94 29.47 2119.94 583.76 9933.21 00:09:09.849 PCIE (0000:00:12.0) NSID 3 from core 0: 7544.94 29.47 2119.89 572.23 9708.50 00:09:09.849 ======================================================== 00:09:09.849 Total : 45269.66 176.83 2119.88 572.23 12326.43 00:09:09.849 00:09:11.750 Initializing NVMe Controllers 00:09:11.750 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:11.750 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:11.750 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:11.750 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:11.750 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:11.750 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:11.750 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:11.750 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:11.750 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:11.750 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:11.750 Initialization complete. Launching workers. 
00:09:11.750 ======================================================== 00:09:11.750 Latency(us) 00:09:11.750 Device Information : IOPS MiB/s Average min max 00:09:11.750 PCIE (0000:00:13.0) NSID 1 from core 2: 4215.56 16.47 3794.85 760.96 27706.19 00:09:11.750 PCIE (0000:00:10.0) NSID 1 from core 2: 4215.56 16.47 3793.11 724.06 27199.15 00:09:11.750 PCIE (0000:00:11.0) NSID 1 from core 2: 4215.56 16.47 3795.16 718.95 25972.38 00:09:11.750 PCIE (0000:00:12.0) NSID 1 from core 2: 4215.56 16.47 3795.47 762.71 23132.84 00:09:11.750 PCIE (0000:00:12.0) NSID 2 from core 2: 4215.56 16.47 3795.23 754.45 27005.64 00:09:11.750 PCIE (0000:00:12.0) NSID 3 from core 2: 4215.56 16.47 3795.35 751.58 26372.63 00:09:11.750 ======================================================== 00:09:11.750 Total : 25293.37 98.80 3794.86 718.95 27706.19 00:09:11.750 00:09:11.750 ************************************ 00:09:11.750 END TEST nvme_multi_secondary 00:09:11.750 ************************************ 00:09:11.750 11:25:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 63788 00:09:11.750 11:25:10 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 63789 00:09:11.750 00:09:11.750 real 0m10.775s 00:09:11.750 user 0m18.383s 00:09:11.750 sys 0m0.612s 00:09:11.750 11:25:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.750 11:25:10 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:11.750 11:25:10 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:11.750 11:25:10 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:11.750 11:25:10 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/62745 ]] 00:09:11.750 11:25:10 nvme -- common/autotest_common.sh@1092 -- # kill 62745 00:09:11.750 11:25:10 nvme -- common/autotest_common.sh@1093 -- # wait 62745 00:09:11.750 [2024-11-05 11:25:10.639073] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.639159] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.639196] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.639220] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.643286] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.643646] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.643775] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.643897] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.647251] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 
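Editor's note on nvme_multi_secondary, which finishes above: it exercises SPDK's multi-process mode by running three spdk_nvme_perf instances against the same four controllers, all sharing one shared-memory id (-i 0) but pinned to disjoint cores (0x1 / 0x2 / 0x4), so that one instance owns the DPDK hugepage state as the primary process while the others attach as secondaries. The flags below are copied from the trace; the backgrounding and wait logic is only a simplified sketch of what nvme.sh does.

PERF=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf
# One longer-running instance plus two shorter ones, all sharing -i 0
# but pinned to different cores, as in the first round of the trace.
$PERF -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
$PERF -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4          # foreground instance
wait "$pid0" "$pid1"                                   # nvme.sh waits on each pid in turn

The "Dropping the request" messages around this point appear to be cleanup from kill_stub: pending admin requests still owned by a test process that has already exited (pid 63661) are discarded rather than completed.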
00:09:11.750 [2024-11-05 11:25:10.647586] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.647864] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.648204] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.750 [2024-11-05 11:25:10.651190] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.751 [2024-11-05 11:25:10.651349] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.751 [2024-11-05 11:25:10.651465] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.751 [2024-11-05 11:25:10.651530] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63661) is not found. Dropping the request. 00:09:11.751 11:25:10 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:09:11.751 11:25:10 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:09:11.751 11:25:10 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:11.751 11:25:10 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:11.751 11:25:10 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.751 11:25:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.751 ************************************ 00:09:11.751 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:11.751 ************************************ 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:11.751 * Looking for test storage... 
00:09:11.751 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.751 --rc genhtml_branch_coverage=1 00:09:11.751 --rc genhtml_function_coverage=1 00:09:11.751 --rc genhtml_legend=1 00:09:11.751 --rc geninfo_all_blocks=1 00:09:11.751 --rc geninfo_unexecuted_blocks=1 00:09:11.751 00:09:11.751 ' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.751 --rc genhtml_branch_coverage=1 00:09:11.751 --rc genhtml_function_coverage=1 00:09:11.751 --rc genhtml_legend=1 00:09:11.751 --rc geninfo_all_blocks=1 00:09:11.751 --rc geninfo_unexecuted_blocks=1 00:09:11.751 00:09:11.751 ' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.751 --rc genhtml_branch_coverage=1 00:09:11.751 --rc genhtml_function_coverage=1 00:09:11.751 --rc genhtml_legend=1 00:09:11.751 --rc geninfo_all_blocks=1 00:09:11.751 --rc geninfo_unexecuted_blocks=1 00:09:11.751 00:09:11.751 ' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:11.751 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.751 --rc genhtml_branch_coverage=1 00:09:11.751 --rc genhtml_function_coverage=1 00:09:11.751 --rc genhtml_legend=1 00:09:11.751 --rc geninfo_all_blocks=1 00:09:11.751 --rc geninfo_unexecuted_blocks=1 00:09:11.751 00:09:11.751 ' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:11.751 
11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:11.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=63945 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 63945 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 63945 ']' 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
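Editor's note: the setup for bdev_nvme_reset_stuck_adm_cmd is traced above; the first NVMe bus address is pulled out of gen_nvme.sh's JSON output and a plain spdk_tgt is started on four cores, after which the whole test is driven over JSON-RPC on /var/tmp/spdk.sock. A condensed sketch of that setup using the same commands follows; the socket poll is a crude stand-in for the autotest waitforlisten helper.

SPDK=/home/vagrant/spdk_repo/spdk
# First controller reported by gen_nvme.sh (0000:00:10.0 in this run).
bdf=$("$SPDK/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)

"$SPDK/build/bin/spdk_tgt" -m 0xF &
spdk_target_pid=$!
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # stand-in for waitforlisten
echo "spdk_tgt $spdk_target_pid ready, testing $bdf"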
00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:11.751 11:25:10 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:12.009 [2024-11-05 11:25:11.059016] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:09:12.009 [2024-11-05 11:25:11.059129] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63945 ] 00:09:12.009 [2024-11-05 11:25:11.229502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:12.268 [2024-11-05 11:25:11.330902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.268 [2024-11-05 11:25:11.331212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:12.268 [2024-11-05 11:25:11.332063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:12.268 [2024-11-05 11:25:11.332142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:12.834 nvme0n1 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_wN3WK.txt 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:12.834 true 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730805911 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=63968 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:12.834 11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:12.834 
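Editor's note: the RPC sequence above is the heart of the test. An admin-queue error injection is armed on nvme0 so that the next Get Features (opcode 10) is held for up to 15 s and completed with SCT=0 / SC=1 (Invalid Opcode) without ever being submitted to the device; bdev_nvme_send_cmd then issues exactly that command in the background, and two seconds later the controller is reset underneath it (next in the trace). A compressed sketch with the same RPCs, continuing from the setup sketch above:

RPC="$SPDK/scripts/rpc.py"
"$RPC" bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$bdf"

# Hold the next admin Get Features (opc 10) for up to 15 s and fail it
# with SCT=0 / SC=1 instead of submitting it to the device.
"$RPC" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
    --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit

# The raw Get Features command from the trace (opcode 0x0a, cdw10=0x7, Number of Queues).
cmd=CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA==
"$RPC" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c "$cmd" &

sleep 2
"$RPC" bdev_nvme_reset_controller nvme0    # must complete the stuck command early

In the trace that follows, the reset completes the held command manually ("Command completed manually", "INVALID OPCODE (00/01)") after roughly 3 s (diff_time=3), well inside the 5 s budget and far short of the 15 s injection timeout; the completion saved to the temp file is then base64-decoded and its status bits split into SC=0x1 / SCT=0x0, confirming the injected status came back.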
11:25:11 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:14.733 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:14.733 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.733 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:14.733 [2024-11-05 11:25:14.008300] nvme_ctrlr.c:1701:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:09:14.733 [2024-11-05 11:25:14.008638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:14.733 [2024-11-05 11:25:14.008679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:14.733 [2024-11-05 11:25:14.008693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:14.992 [2024-11-05 11:25:14.010349] bdev_nvme.c:2236:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:09:14.992 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 63968 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 63968 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 63968 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=3 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_wN3WK.txt 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_wN3WK.txt 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 63945 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 63945 ']' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 63945 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63945 00:09:14.992 killing process with pid 63945 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63945' 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 63945 00:09:14.992 11:25:14 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 63945 00:09:16.430 11:25:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:16.430 11:25:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:16.430 ************************************ 00:09:16.430 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:16.430 ************************************ 00:09:16.430 00:09:16.430 real 0m4.691s 
00:09:16.430 user 0m16.704s 00:09:16.430 sys 0m0.486s 00:09:16.430 11:25:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.430 11:25:15 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:16.430 11:25:15 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:16.430 11:25:15 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:16.430 11:25:15 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:16.430 11:25:15 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.430 11:25:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.430 ************************************ 00:09:16.430 START TEST nvme_fio 00:09:16.430 ************************************ 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:09:16.430 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:16.430 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:16.430 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:16.430 11:25:15 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:16.430 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:16.430 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:16.431 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:16.431 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:16.431 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:16.689 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:16.689 11:25:15 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:16.947 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:16.947 11:25:16 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:16.947 11:25:16 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:16.947 11:25:16 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:17.206 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:17.206 fio-3.35 00:09:17.206 Starting 1 thread 00:09:23.770 00:09:23.770 test: (groupid=0, jobs=1): err= 0: pid=64112: Tue Nov 5 11:25:21 2024 00:09:23.770 read: IOPS=22.3k, BW=87.2MiB/s (91.5MB/s)(175MiB/2001msec) 00:09:23.770 slat (nsec): min=4244, max=71236, avg=5089.78, stdev=2165.54 00:09:23.770 clat (usec): min=267, max=9867, avg=2855.37, stdev=869.85 00:09:23.770 lat (usec): min=271, max=9904, avg=2860.46, stdev=870.79 00:09:23.770 clat percentiles (usec): 00:09:23.770 | 1.00th=[ 1762], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:23.770 | 30.00th=[ 2409], 40.00th=[ 2474], 50.00th=[ 2540], 60.00th=[ 2671], 00:09:23.770 | 70.00th=[ 2835], 80.00th=[ 3097], 90.00th=[ 4047], 95.00th=[ 4817], 00:09:23.770 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 8455], 99.95th=[ 8979], 00:09:23.770 | 99.99th=[ 9765] 00:09:23.770 bw ( KiB/s): min=84264, max=98064, per=100.00%, avg=91208.00, stdev=6900.42, samples=3 00:09:23.770 iops : min=21066, max=24516, avg=22802.00, stdev=1725.11, samples=3 00:09:23.770 write: IOPS=22.2k, BW=86.6MiB/s (90.8MB/s)(173MiB/2001msec); 0 zone resets 00:09:23.770 slat (nsec): min=4308, max=73212, avg=5227.66, stdev=2089.06 00:09:23.770 clat (usec): min=223, max=9811, avg=2876.42, stdev=878.32 00:09:23.770 lat (usec): min=227, max=9823, avg=2881.64, stdev=879.20 00:09:23.770 clat percentiles (usec): 00:09:23.770 | 1.00th=[ 1762], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:23.770 | 30.00th=[ 2442], 40.00th=[ 2474], 50.00th=[ 2573], 60.00th=[ 2671], 00:09:23.770 | 70.00th=[ 2835], 80.00th=[ 3130], 90.00th=[ 4080], 95.00th=[ 4883], 00:09:23.770 | 99.00th=[ 6194], 99.50th=[ 6521], 99.90th=[ 8586], 99.95th=[ 9110], 00:09:23.770 | 99.99th=[ 9634] 00:09:23.770 bw ( KiB/s): min=85664, max=97504, per=100.00%, avg=91376.00, stdev=5930.95, samples=3 00:09:23.770 iops : min=21416, max=24376, avg=22844.00, stdev=1482.74, samples=3 00:09:23.770 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.01% 00:09:23.770 lat (msec) : 2=2.73%, 4=86.84%, 10=10.38% 00:09:23.770 cpu : usr=99.20%, sys=0.05%, ctx=19, majf=0, 
minf=608 00:09:23.770 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:23.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:23.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:23.770 issued rwts: total=44677,44382,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:23.770 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:23.770 00:09:23.770 Run status group 0 (all jobs): 00:09:23.770 READ: bw=87.2MiB/s (91.5MB/s), 87.2MiB/s-87.2MiB/s (91.5MB/s-91.5MB/s), io=175MiB (183MB), run=2001-2001msec 00:09:23.770 WRITE: bw=86.6MiB/s (90.8MB/s), 86.6MiB/s-86.6MiB/s (90.8MB/s-90.8MB/s), io=173MiB (182MB), run=2001-2001msec 00:09:23.770 ----------------------------------------------------- 00:09:23.770 Suppressions used: 00:09:23.770 count bytes template 00:09:23.770 1 32 /usr/src/fio/parse.c 00:09:23.770 1 8 libtcmalloc_minimal.so 00:09:23.770 ----------------------------------------------------- 00:09:23.770 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:23.770 11:25:22 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:23.770 11:25:22 nvme.nvme_fio -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:23.770 11:25:22 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:23.770 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:23.770 fio-3.35 00:09:23.770 Starting 1 thread 00:09:30.348 00:09:30.348 test: (groupid=0, jobs=1): err= 0: pid=64168: Tue Nov 5 11:25:28 2024 00:09:30.348 read: IOPS=21.6k, BW=84.4MiB/s (88.5MB/s)(169MiB/2001msec) 00:09:30.348 slat (nsec): min=3794, max=78564, avg=5140.82, stdev=2353.18 00:09:30.348 clat (usec): min=279, max=10457, avg=2944.80, stdev=952.84 00:09:30.348 lat (usec): min=284, max=10510, avg=2949.94, stdev=953.93 00:09:30.348 clat percentiles (usec): 00:09:30.349 | 1.00th=[ 1991], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:30.349 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2573], 60.00th=[ 2704], 00:09:30.349 | 70.00th=[ 2868], 80.00th=[ 3261], 90.00th=[ 4228], 95.00th=[ 5145], 00:09:30.349 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8586], 99.95th=[ 8979], 00:09:30.349 | 99.99th=[10290] 00:09:30.349 bw ( KiB/s): min=75984, max=93000, per=99.04%, avg=85629.33, stdev=8733.08, samples=3 00:09:30.349 iops : min=18996, max=23250, avg=21407.33, stdev=2183.27, samples=3 00:09:30.349 write: IOPS=21.5k, BW=83.8MiB/s (87.9MB/s)(168MiB/2001msec); 0 zone resets 00:09:30.349 slat (usec): min=4, max=610, avg= 5.32, stdev= 3.78 00:09:30.349 clat (usec): min=340, max=10380, avg=2975.47, stdev=963.79 00:09:30.349 lat (usec): min=346, max=10399, avg=2980.79, stdev=964.89 00:09:30.349 clat percentiles (usec): 00:09:30.349 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2343], 20.00th=[ 2409], 00:09:30.349 | 30.00th=[ 2442], 40.00th=[ 2507], 50.00th=[ 2606], 60.00th=[ 2737], 00:09:30.349 | 70.00th=[ 2933], 80.00th=[ 3326], 90.00th=[ 4293], 95.00th=[ 5211], 00:09:30.349 | 99.00th=[ 6521], 99.50th=[ 7111], 99.90th=[ 8848], 99.95th=[ 9241], 00:09:30.349 | 99.99th=[10159] 00:09:30.349 bw ( KiB/s): min=77328, max=92688, per=99.94%, avg=85754.67, stdev=7788.13, samples=3 00:09:30.349 iops : min=19332, max=23172, avg=21438.67, stdev=1947.03, samples=3 00:09:30.349 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:09:30.349 lat (msec) : 2=0.90%, 4=87.40%, 10=11.65%, 20=0.02% 00:09:30.349 cpu : usr=99.05%, sys=0.05%, ctx=3, majf=0, minf=607 00:09:30.349 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:30.349 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:30.349 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:30.349 issued rwts: total=43253,42925,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:30.349 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:30.349 00:09:30.349 Run status group 0 (all jobs): 00:09:30.349 READ: bw=84.4MiB/s (88.5MB/s), 84.4MiB/s-84.4MiB/s (88.5MB/s-88.5MB/s), io=169MiB (177MB), run=2001-2001msec 00:09:30.349 WRITE: bw=83.8MiB/s (87.9MB/s), 83.8MiB/s-83.8MiB/s (87.9MB/s-87.9MB/s), io=168MiB (176MB), run=2001-2001msec 00:09:30.349 ----------------------------------------------------- 00:09:30.349 Suppressions used: 00:09:30.349 count bytes template 00:09:30.349 1 32 /usr/src/fio/parse.c 00:09:30.349 1 8 libtcmalloc_minimal.so 00:09:30.349 ----------------------------------------------------- 00:09:30.349 00:09:30.349 11:25:28 
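Editor's note: each nvme_fio iteration above follows the same recipe per controller. spdk_nvme_identify is used to pick a block size (4096 unless the namespace reports extended data LBAs), then stock fio is run with SPDK's external ioengine, passing the PCIe address through --filename (with dots instead of colons, since fio treats ':' specially) and preloading libasan ahead of the plugin so the sanitizer initializes first. A stripped-down sketch of one iteration using the paths and flags from the trace; example_config.fio ships with SPDK.

SPDK=/home/vagrant/spdk_repo/spdk
bdf=0000:00:10.0                       # one of the four controllers under test
traddr=${bdf//:/.}                     # fio filename parsing wants dots, not colons

LD_PRELOAD="/usr/lib64/libasan.so.8 $SPDK/build/fio/spdk_nvme" \
  /usr/src/fio/fio "$SPDK/app/fio/nvme/example_config.fio" \
  "--filename=trtype=PCIe traddr=$traddr" --bs=4096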
nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:30.349 11:25:28 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:30.349 11:25:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:30.349 11:25:28 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:30.349 11:25:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:30.349 11:25:29 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:30.349 11:25:29 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:30.349 11:25:29 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:30.349 11:25:29 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:30.349 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:30.349 fio-3.35 00:09:30.349 Starting 1 thread 00:09:36.968 00:09:36.968 test: (groupid=0, jobs=1): err= 0: pid=64228: Tue Nov 5 11:25:35 2024 00:09:36.968 read: IOPS=19.9k, BW=77.9MiB/s (81.7MB/s)(156MiB/2001msec) 00:09:36.968 slat (nsec): min=3400, max=60271, avg=5217.14, stdev=2358.87 00:09:36.968 clat (usec): min=1135, max=9548, avg=3186.11, stdev=989.18 00:09:36.968 lat (usec): min=1141, max=9592, avg=3191.33, stdev=990.16 00:09:36.968 clat percentiles (usec): 00:09:36.968 | 1.00th=[ 2057], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:36.968 | 30.00th=[ 2606], 40.00th=[ 
2704], 50.00th=[ 2802], 60.00th=[ 2999], 00:09:36.968 | 70.00th=[ 3228], 80.00th=[ 3785], 90.00th=[ 4752], 95.00th=[ 5342], 00:09:36.968 | 99.00th=[ 6521], 99.50th=[ 6915], 99.90th=[ 7504], 99.95th=[ 8356], 00:09:36.968 | 99.99th=[ 9503] 00:09:36.968 bw ( KiB/s): min=77880, max=83040, per=100.00%, avg=80320.00, stdev=2591.37, samples=3 00:09:36.968 iops : min=19470, max=20760, avg=20080.00, stdev=647.84, samples=3 00:09:36.968 write: IOPS=19.9k, BW=77.7MiB/s (81.5MB/s)(156MiB/2001msec); 0 zone resets 00:09:36.968 slat (nsec): min=3462, max=72235, avg=5357.16, stdev=2422.56 00:09:36.968 clat (usec): min=1146, max=9495, avg=3217.33, stdev=995.77 00:09:36.968 lat (usec): min=1151, max=9506, avg=3222.69, stdev=996.76 00:09:36.968 clat percentiles (usec): 00:09:36.968 | 1.00th=[ 2073], 5.00th=[ 2278], 10.00th=[ 2376], 20.00th=[ 2507], 00:09:36.968 | 30.00th=[ 2606], 40.00th=[ 2737], 50.00th=[ 2835], 60.00th=[ 3032], 00:09:36.968 | 70.00th=[ 3294], 80.00th=[ 3851], 90.00th=[ 4752], 95.00th=[ 5342], 00:09:36.968 | 99.00th=[ 6456], 99.50th=[ 6915], 99.90th=[ 7635], 99.95th=[ 8225], 00:09:36.968 | 99.99th=[ 8979] 00:09:36.968 bw ( KiB/s): min=78056, max=83016, per=100.00%, avg=80378.67, stdev=2494.93, samples=3 00:09:36.968 iops : min=19514, max=20754, avg=20094.67, stdev=623.73, samples=3 00:09:36.968 lat (msec) : 2=0.77%, 4=81.30%, 10=17.93% 00:09:36.968 cpu : usr=98.85%, sys=0.30%, ctx=15, majf=0, minf=607 00:09:36.968 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:36.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:36.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:36.968 issued rwts: total=39918,39827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:36.968 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:36.968 00:09:36.968 Run status group 0 (all jobs): 00:09:36.968 READ: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=156MiB (164MB), run=2001-2001msec 00:09:36.968 WRITE: bw=77.7MiB/s (81.5MB/s), 77.7MiB/s-77.7MiB/s (81.5MB/s-81.5MB/s), io=156MiB (163MB), run=2001-2001msec 00:09:36.968 ----------------------------------------------------- 00:09:36.968 Suppressions used: 00:09:36.968 count bytes template 00:09:36.968 1 32 /usr/src/fio/parse.c 00:09:36.968 1 8 libtcmalloc_minimal.so 00:09:36.968 ----------------------------------------------------- 00:09:36.968 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:36.968 11:25:35 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:36.968 11:25:36 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:36.968 11:25:36 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:36.968 11:25:36 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:37.227 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:37.227 fio-3.35 00:09:37.227 Starting 1 thread 00:09:47.199 00:09:47.199 test: (groupid=0, jobs=1): err= 0: pid=64289: Tue Nov 5 11:25:46 2024 00:09:47.199 read: IOPS=23.6k, BW=92.2MiB/s (96.6MB/s)(184MiB/2001msec) 00:09:47.199 slat (usec): min=4, max=125, avg= 5.09, stdev= 2.30 00:09:47.199 clat (usec): min=224, max=7905, avg=2709.75, stdev=817.04 00:09:47.199 lat (usec): min=237, max=7957, avg=2714.84, stdev=818.45 00:09:47.199 clat percentiles (usec): 00:09:47.199 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:47.199 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:47.199 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3490], 95.00th=[ 4686], 00:09:47.199 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[ 7570], 99.95th=[ 7635], 00:09:47.199 | 99.99th=[ 7767] 00:09:47.199 bw ( KiB/s): min=89584, max=96504, per=98.27%, avg=92738.67, stdev=3500.18, samples=3 00:09:47.199 iops : min=22398, max=24126, avg=23185.33, stdev=874.14, samples=3 00:09:47.199 write: IOPS=23.4k, BW=91.5MiB/s (96.0MB/s)(183MiB/2001msec); 0 zone resets 00:09:47.199 slat (usec): min=4, max=125, avg= 5.34, stdev= 2.27 00:09:47.199 clat (usec): min=300, max=7833, avg=2711.61, stdev=816.67 00:09:47.199 lat (usec): min=304, max=7850, avg=2716.96, stdev=818.07 00:09:47.199 clat percentiles (usec): 00:09:47.199 | 1.00th=[ 1713], 5.00th=[ 2147], 10.00th=[ 2311], 20.00th=[ 2376], 00:09:47.199 | 30.00th=[ 2409], 40.00th=[ 2442], 50.00th=[ 2474], 60.00th=[ 2507], 00:09:47.199 | 70.00th=[ 2573], 80.00th=[ 2704], 90.00th=[ 3458], 95.00th=[ 4686], 00:09:47.199 | 99.00th=[ 6325], 99.50th=[ 6652], 99.90th=[ 7570], 99.95th=[ 7635], 00:09:47.199 | 99.99th=[ 7701] 
00:09:47.199 bw ( KiB/s): min=89072, max=97936, per=99.03%, avg=92821.33, stdev=4587.02, samples=3 00:09:47.199 iops : min=22268, max=24484, avg=23205.33, stdev=1146.75, samples=3 00:09:47.199 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:09:47.199 lat (msec) : 2=3.22%, 4=89.19%, 10=7.55% 00:09:47.199 cpu : usr=99.30%, sys=0.00%, ctx=3, majf=0, minf=605 00:09:47.199 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:47.199 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:47.199 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:47.199 issued rwts: total=47207,46890,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:47.199 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:47.199 00:09:47.199 Run status group 0 (all jobs): 00:09:47.199 READ: bw=92.2MiB/s (96.6MB/s), 92.2MiB/s-92.2MiB/s (96.6MB/s-96.6MB/s), io=184MiB (193MB), run=2001-2001msec 00:09:47.199 WRITE: bw=91.5MiB/s (96.0MB/s), 91.5MiB/s-91.5MiB/s (96.0MB/s-96.0MB/s), io=183MiB (192MB), run=2001-2001msec 00:09:47.458 ----------------------------------------------------- 00:09:47.458 Suppressions used: 00:09:47.458 count bytes template 00:09:47.458 1 32 /usr/src/fio/parse.c 00:09:47.458 1 8 libtcmalloc_minimal.so 00:09:47.458 ----------------------------------------------------- 00:09:47.458 00:09:47.458 11:25:46 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:47.458 11:25:46 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:47.458 00:09:47.458 real 0m31.057s 00:09:47.458 user 0m16.716s 00:09:47.458 sys 0m27.265s 00:09:47.458 11:25:46 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.458 11:25:46 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:47.458 ************************************ 00:09:47.458 END TEST nvme_fio 00:09:47.458 ************************************ 00:09:47.458 00:09:47.458 real 1m40.372s 00:09:47.458 user 3m36.945s 00:09:47.458 sys 0m37.878s 00:09:47.458 11:25:46 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:47.458 11:25:46 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:47.458 ************************************ 00:09:47.458 END TEST nvme 00:09:47.458 ************************************ 00:09:47.458 11:25:46 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:47.458 11:25:46 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:47.458 11:25:46 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:47.458 11:25:46 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:47.458 11:25:46 -- common/autotest_common.sh@10 -- # set +x 00:09:47.458 ************************************ 00:09:47.458 START TEST nvme_scc 00:09:47.458 ************************************ 00:09:47.458 11:25:46 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:47.458 * Looking for test storage... 
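An aside on the fio invocation traced in the nvme_fio run above: before launching fio against the SPDK ioengine, the fio_plugin helper in autotest_common.sh asks the dynamic linker which sanitizer runtime the plugin was linked against and preloads it ahead of the plugin, so ASan interposes before the ioengine is dlopen'ed. A minimal sketch of that logic, reusing the paths that appear in the log (an illustration, not a drop-in copy of the helper):

  plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
  sanitizers=('libasan' 'libclang_rt.asan')
  asan_lib=
  for sanitizer in "${sanitizers[@]}"; do
      # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"; field 3 is the path.
      asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
      [[ -n "$asan_lib" ]] && break
  done
  # Preload the sanitizer runtime (if any) together with the plugin, then run fio.
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
      /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
      '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096

The dots in the traddr are deliberate: fio treats ':' as a filename separator, so the PCI address is written with '.' and translated back by the SPDK plugin.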
00:09:47.458 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:47.458 11:25:46 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:47.458 11:25:46 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version 00:09:47.458 11:25:46 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.717 --rc genhtml_branch_coverage=1 00:09:47.717 --rc genhtml_function_coverage=1 00:09:47.717 --rc genhtml_legend=1 00:09:47.717 --rc geninfo_all_blocks=1 00:09:47.717 --rc geninfo_unexecuted_blocks=1 00:09:47.717 00:09:47.717 ' 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.717 --rc genhtml_branch_coverage=1 00:09:47.717 --rc genhtml_function_coverage=1 00:09:47.717 --rc genhtml_legend=1 00:09:47.717 --rc geninfo_all_blocks=1 00:09:47.717 --rc geninfo_unexecuted_blocks=1 00:09:47.717 00:09:47.717 ' 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.717 --rc genhtml_branch_coverage=1 00:09:47.717 --rc genhtml_function_coverage=1 00:09:47.717 --rc genhtml_legend=1 00:09:47.717 --rc geninfo_all_blocks=1 00:09:47.717 --rc geninfo_unexecuted_blocks=1 00:09:47.717 00:09:47.717 ' 00:09:47.717 11:25:46 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:47.717 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:47.717 --rc genhtml_branch_coverage=1 00:09:47.717 --rc genhtml_function_coverage=1 00:09:47.717 --rc genhtml_legend=1 00:09:47.717 --rc geninfo_all_blocks=1 00:09:47.717 --rc geninfo_unexecuted_blocks=1 00:09:47.717 00:09:47.717 ' 00:09:47.717 11:25:46 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:47.717 11:25:46 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:47.717 11:25:46 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:47.717 11:25:46 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:47.717 11:25:46 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.717 11:25:46 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.717 11:25:46 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 11:25:46 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 11:25:46 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.717 11:25:46 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:47.718 11:25:46 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
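The lcov setup above runs through the generic version comparison in scripts/common.sh: "lt 1.15 2" splits both versions on '.', '-' and ':' and compares them component by component, treating missing components as zero. A stripped-down sketch of that scheme (an illustration of the traced logic, not the script verbatim):

  lt() { cmp_versions "$1" '<' "$2"; }

  cmp_versions() {
      local IFS=.-:                      # split versions on '.', '-' and ':'
      local -a ver1 ver2
      read -ra ver1 <<< "$1"
      local op=$2
      read -ra ver2 <<< "$3"
      local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < len; v++ )); do
          local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
          (( a > b )) && { [[ $op == '>' ]]; return; }
          (( a < b )) && { [[ $op == '<' ]]; return; }
      done
      [[ $op == '==' ]]                  # every component matched
  }

  lt 1.15 2 && echo "lcov 1.15 predates 2.x"

Here the check only decides which set of LCOV_OPTS/LCOV flags gets exported for the coverage run.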
00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:47.718 11:25:46 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:47.718 11:25:46 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:47.718 11:25:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:47.718 11:25:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:47.718 11:25:46 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:47.718 11:25:46 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:47.976 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:47.976 Waiting for block devices as requested 00:09:48.234 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.234 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.234 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:48.234 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:53.508 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:53.508 11:25:52 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:53.508 11:25:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:53.508 11:25:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:53.508 11:25:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.508 11:25:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
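What follows from here, and continues for the rest of the scan, is functions.sh's nvme_get walking the output of nvme id-ctrl line by line: each "name : value" pair is split on the colon and stored into a global associative array named after the controller, which the rest of nvme_scc.sh then queries. In condensed form the pattern looks roughly like this (a stand-alone rendition; the real helper in test/common/nvme/functions.sh passes the array name by reference and assigns via eval, which is why every field shows up as an eval in the trace):

  declare -A nvme0=()
  while IFS=: read -r reg val; do
      [[ -z $val ]] && continue                 # skip lines without a "name : value" shape
      reg=${reg//[[:space:]]/}                  # "vid   " -> "vid"
      val=${val#"${val%%[![:space:]]*}"}        # drop leading spaces, keep the value text
      nvme0[$reg]=$val                          # e.g. nvme0[vid]=0x1b36
  done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0)

  echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"

The same loop is repeated below for each namespace (nvme0n1 and so on) using nvme id-ns, which is where the long run of per-field eval lines comes from.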
00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.508 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
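One of the values captured a few lines above, mdts=7, fixes the controller's maximum transfer size: MDTS is a power-of-two multiple of the minimum memory page size (CAP.MPSMIN, normally 4 KiB on these QEMU controllers), so 7 means 2^7 * 4 KiB = 512 KiB per command. A quick back-of-the-envelope check, with the 4 KiB page size assumed rather than read from the CAP register:

  mdts=7            # from nvme0[mdts] above
  min_page=4096     # assumed CAP.MPSMIN page size; not read from the register here
  echo $(( (1 << mdts) * min_page ))   # 524288 bytes = 512 KiB

Both the kernel driver and SPDK split or reject I/Os larger than this limit.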
00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:53.509 11:25:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.509 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.510 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.510 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
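A note on the namespace fields being filled in here: flbas=0x4 (captured just above) is the Formatted LBA Size field, whose low nibble indexes the lbaf descriptors recorded by this same scan (lbaf0 through lbaf3 appear below; higher entries follow later in the dump), and each descriptor's lbads is log2 of the LBA data size, so lbads:9 means 512-byte blocks. A small lookup sketch against the array this scan is building (the lbaf4 entry is not visible in this excerpt, so its shape is assumed for illustration):

  flbas=${nvme0n1[flbas]}                      # 0x4 in this dump
  fmt=$(( flbas & 0xf ))                       # in-use LBA format index -> 4
  lbaf=${nvme0n1[lbaf$fmt]}                    # e.g. 'ms:0 lbads:12 rp:0 ' (assumed value)
  [[ $lbaf =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
  echo "LBA format $fmt -> $(( 1 << lbads ))-byte data blocks"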
00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.511 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.512 11:25:52 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:53.512 11:25:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:53.512 11:25:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:53.512 11:25:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.512 11:25:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.512 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:53.513 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.513 
11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:53.513 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:53.514 11:25:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:53.514 11:25:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:53.514 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:53.515 11:25:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:53.515 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.516 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.517 
11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:53.517 11:25:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:53.517 11:25:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:53.517 11:25:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.517 11:25:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:53.517 11:25:52 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:53.517 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:53.518 11:25:52 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:53.518 11:25:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:53.518 11:25:52 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:53.518 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.519 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:53.520 
11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.520 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.521 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:53.521 11:25:52 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:53.521 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:53.522 11:25:52 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.522 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 
11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 
11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.523 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:53.524 11:25:52 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:53.524 
11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:53.524 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:53.525 11:25:52 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:53.525 11:25:52 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:53.525 11:25:52 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:53.525 11:25:52 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:53.525 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:53.525 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 
11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.526 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:53.527 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:53.528 11:25:52 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:53.528 
11:25:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:53.528 11:25:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:53.528 11:25:52 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:53.528 11:25:52 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:54.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:54.353 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.611 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.611 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:54.611 0000:00:13.0 (1b36 
0010): nvme -> uio_pci_generic 00:09:54.611 11:25:53 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:54.611 11:25:53 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:09:54.611 11:25:53 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.611 11:25:53 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:54.611 ************************************ 00:09:54.611 START TEST nvme_simple_copy 00:09:54.611 ************************************ 00:09:54.611 11:25:53 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0' 00:09:54.869 Initializing NVMe Controllers 00:09:54.869 Attaching to 0000:00:10.0 00:09:54.869 Controller supports SCC. Attached to 0000:00:10.0 00:09:54.869 Namespace ID: 1 size: 6GB 00:09:54.869 Initialization complete. 00:09:54.869 00:09:54.869 Controller QEMU NVMe Ctrl (12340 ) 00:09:54.869 Controller PCI vendor:6966 PCI subsystem vendor:6900 00:09:54.869 Namespace Block Size:4096 00:09:54.869 Writing LBAs 0 to 63 with Random Data 00:09:54.869 Copied LBAs from 0 - 63 to the Destination LBA 256 00:09:54.869 LBAs matching Written Data: 64 00:09:54.869 00:09:54.869 real 0m0.248s 00:09:54.869 user 0m0.087s 00:09:54.869 sys 0m0.059s 00:09:54.869 11:25:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.869 ************************************ 00:09:54.869 END TEST nvme_simple_copy 00:09:54.869 ************************************ 00:09:54.869 11:25:54 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x 00:09:54.869 00:09:54.869 real 0m7.380s 00:09:54.869 user 0m1.028s 00:09:54.869 sys 0m1.281s 00:09:54.869 11:25:54 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:54.869 11:25:54 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:09:54.869 ************************************ 00:09:54.869 END TEST nvme_scc 00:09:54.869 ************************************ 00:09:54.869 11:25:54 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]] 00:09:54.869 11:25:54 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]] 00:09:54.869 11:25:54 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]] 00:09:54.869 11:25:54 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]] 00:09:54.869 11:25:54 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh 00:09:54.869 11:25:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:54.869 11:25:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:54.869 11:25:54 -- common/autotest_common.sh@10 -- # set +x 00:09:54.869 ************************************ 00:09:54.869 START TEST nvme_fdp 00:09:54.869 ************************************ 00:09:54.869 11:25:54 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh 00:09:54.869 * Looking for test storage... 
00:09:54.869 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:54.869 11:25:54 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:54.869 11:25:54 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:54.869 11:25:54 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@345 -- # : 1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.128 --rc genhtml_branch_coverage=1 00:09:55.128 --rc genhtml_function_coverage=1 00:09:55.128 --rc genhtml_legend=1 00:09:55.128 --rc geninfo_all_blocks=1 00:09:55.128 --rc geninfo_unexecuted_blocks=1 00:09:55.128 00:09:55.128 ' 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.128 --rc genhtml_branch_coverage=1 00:09:55.128 --rc genhtml_function_coverage=1 00:09:55.128 --rc genhtml_legend=1 00:09:55.128 --rc geninfo_all_blocks=1 00:09:55.128 --rc geninfo_unexecuted_blocks=1 00:09:55.128 00:09:55.128 ' 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:09:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.128 --rc genhtml_branch_coverage=1 00:09:55.128 --rc genhtml_function_coverage=1 00:09:55.128 --rc genhtml_legend=1 00:09:55.128 --rc geninfo_all_blocks=1 00:09:55.128 --rc geninfo_unexecuted_blocks=1 00:09:55.128 00:09:55.128 ' 00:09:55.128 11:25:54 nvme_fdp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:55.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.128 --rc genhtml_branch_coverage=1 00:09:55.128 --rc genhtml_function_coverage=1 00:09:55.128 --rc genhtml_legend=1 00:09:55.128 --rc geninfo_all_blocks=1 00:09:55.128 --rc geninfo_unexecuted_blocks=1 00:09:55.128 00:09:55.128 ' 00:09:55.128 11:25:54 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:55.128 11:25:54 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:55.128 11:25:54 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.128 11:25:54 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.128 11:25:54 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:55.128 11:25:54 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:55.128 11:25:54 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
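The ctrl_has_scc loop traced earlier (oncs=0x15d, "(( oncs & 1 << 8 ))") decides which controllers may run the copy test by checking bit 8 of the ONCS field from "nvme id-ctrl". A minimal standalone sketch of that check, assuming nvme-cli is installed; the device path is only illustrative, not taken from the log:

#!/usr/bin/env bash
# Sketch only: report whether a controller advertises the Simple Copy Command.
# Mirrors the "(( oncs & 1 << 8 ))" test seen in the trace, but reads the ONCS
# field straight from nvme-cli output instead of a cached register array.
ctrl_dev=${1:-/dev/nvme0}                                  # assumed device path
oncs=$(nvme id-ctrl "$ctrl_dev" | awk '/^oncs/ {print $3}')
[[ -n $oncs ]] || { echo "could not read oncs from $ctrl_dev"; exit 1; }
if (( oncs & (1 << 8) )); then
    echo "$ctrl_dev supports Simple Copy (oncs=$oncs)"
else
    echo "$ctrl_dev lacks Simple Copy (oncs=$oncs)"
fi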
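The "lt 1.15 2" trace above gates lcov usage by splitting both version strings and comparing them component by component. A simplified sketch of that comparison, assuming plain numeric dot-separated versions; the real cmp_versions helper in scripts/common.sh also splits on '-' and ':':

#!/usr/bin/env bash
# Sketch only: return success (0) when the first dotted version is strictly
# lower than the second, comparing numeric components left to right.
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local i len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1                                     # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"             # prints: 1.15 < 2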
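The controller scan that follows (like the nvme3 scan above) is built on one pattern: pipe "nvme id-ctrl" through "IFS=: read -r reg val" and store every non-empty value in a bash associative array keyed by register name. A minimal sketch of that pattern, with the device path and array name as illustrative assumptions:

#!/usr/bin/env bash
# Sketch only: collapse "nvme id-ctrl" output into an associative array, the
# same IFS=: / read -r reg val loop the trace repeats for every register.
declare -A ctrl_regs
while IFS=: read -r reg val; do
    reg=${reg//[[:space:]]/}                    # key, e.g. "oncs", "sqes"
    val="${val#"${val%%[![:space:]]*}"}"        # trim leading whitespace
    val="${val%"${val##*[![:space:]]}"}"        # trim trailing whitespace
    [[ -n $reg && -n $val ]] && ctrl_regs[$reg]=$val
done < <(nvme id-ctrl /dev/nvme0)               # assumed device path
echo "oncs=${ctrl_regs[oncs]} sqes=${ctrl_regs[sqes]} subnqn=${ctrl_regs[subnqn]}"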
00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:55.128 11:25:54 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:55.128 11:25:54 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:55.128 11:25:54 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:55.387 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:55.387 Waiting for block devices as requested 00:09:55.683 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.683 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.683 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:55.683 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:00.974 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:00.974 11:25:59 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:10:00.974 11:25:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.974 11:25:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:00.974 11:25:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.974 11:25:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.974 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:00.975 11:25:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:00.975 11:25:59 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.975 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:00.976 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:00.976 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.976 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:00.977 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.977 
11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:00.977 11:25:59 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.977 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:00.978 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.978 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:00.979 11:25:59 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.979 11:25:59 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:00.979 11:25:59 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.979 11:25:59 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # 
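The xtrace above is test/nvme/functions.sh building one global associative array per device: nvme_get runs the bundled nvme-cli identify command, splits each output line on ':' into a register name and a value, and evals the pair into the array, which is why ${nvme0[subnqn]} ends up as nqn.2019-08.org.qemu:12341 and ${nvme0n1[nsze]} as 0x140000. A minimal sketch of that parsing loop, reconstructed only from the commands visible in the trace; the shipped helper may differ in detail:

  # Sketch of the identify-parsing helper traced above (functions.sh@16-23).
  # $1 = array name (nvme0, nvme0n1, ...), $2 = nvme-cli subcommand, $3 = device node.
  nvme_get() {
      local ref=$1 cmd=$2 dev=$3 reg val
      local -gA "$ref=()"              # global associative array, as in the trace
      while IFS=: read -r reg val; do
          reg=${reg//[[:space:]]/}     # drop the padding around the register name
          val=${val# }                 # drop the single space after the colon
          [[ -n $val ]] || continue    # empty fields are skipped, as seen above
          eval "${ref}[$reg]=\"\$val\""
      done < <(nvme "$cmd" "$dev")     # this run invokes /usr/local/src/nvme-cli/nvme
  }

  # Example call matching the trace: nvme_get nvme1 id-ctrl /dev/nvme1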
IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:00.979 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 
11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.980 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 
11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:00.981 11:25:59 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:25:59 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.981 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:00.982 11:26:00 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.982 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:00.983 11:26:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.983 11:26:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:00.983 11:26:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.983 11:26:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:00.983 
11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.983 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:00.984 11:26:00 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.984 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
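The run of functions.sh@21–@23 entries above is the nvme_get helper splitting each "reg : value" line that `nvme id-ctrl /dev/nvme2` prints and eval-ing it into a per-device associative array (here `nvme2`). A minimal sketch of that pattern, written independently of the real helper (array and function names below are illustrative and the whitespace handling is simplified):

```bash
#!/usr/bin/env bash
# Sketch of the id-ctrl parsing pattern visible in the trace:
# split "reg : value" lines on the first ':' and keep them in an
# associative array keyed by register name.
declare -A ctrl

parse_id_ctrl() {
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue      # skip blank/heading lines
        reg=${reg//[[:space:]]/}       # drop the padding around the key
        ctrl[$reg]=${val# }            # keep the value, leading space trimmed
    done < <(nvme id-ctrl "$dev")
}

parse_id_ctrl /dev/nvme2
printf 'mdts=%s oacs=%s subnqn=%s\n' "${ctrl[mdts]}" "${ctrl[oacs]}" "${ctrl[subnqn]}"
```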
00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.985 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.986 11:26:00 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
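Earlier in this pass (the functions.sh@47–@63 entries) the script walked /sys/class/nvme/nvme*, filtered controllers with pci_can_use, and recorded each one in the ctrls, nvmes, bdfs and ordered_ctrls maps before descending into its namespaces. A rough sketch of that enumeration loop, assuming the same sysfs layout (the allow/block-list filtering is omitted and the helper names are placeholders, not the exact functions.sh code):

```bash
#!/usr/bin/env bash
# Sketch of the controller/namespace discovery loop traced above.
declare -A ctrls nvmes bdfs
declare -a ordered_ctrls

for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(readlink -f "$ctrl/device")          # PCI device backing the controller
    bdf=${bdf##*/}                             # e.g. 0000:00:12.0
    ctrl_dev=${ctrl##*/}                       # e.g. nvme2

    ctrls[$ctrl_dev]=$ctrl_dev
    bdfs[$ctrl_dev]=$bdf
    nvmes[$ctrl_dev]=${ctrl_dev}_ns            # name of the per-controller ns map
    ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev

    for ns in "$ctrl/${ctrl##*/}n"*; do        # nvme2n1, nvme2n2, ...
        [[ -e $ns ]] || continue
        echo "found namespace ${ns##*/} on $ctrl_dev ($bdf)"
    done
done
```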
00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:00.986 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:00.987 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
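Once nvme_get has filled a per-namespace array (here `nvme2n1`), later helpers only need dictionary lookups: for example the in-use LBA format follows from `flbas` plus the matching `lbafN` entry. A small illustrative query along those lines (not the actual functions.sh helper, just the arithmetic implied by the fields above):

```bash
#!/usr/bin/env bash
# Illustrative lookup against a parsed id-ns array: flbas selects the
# active LBA format, and the matching lbafN entry carries its lbads
# (log2 of the data size). Values below mirror the trace for nvme2n1.
declare -A nvme2n1=(
    [flbas]=0x4
    [lbaf4]='ms:0 lbads:12 rp:0 (in use)'
)

fmt=$(( ${nvme2n1[flbas]} & 0xf ))              # low nibble = active format index
lbaf=${nvme2n1[lbaf$fmt]}
lbads=${lbaf##*lbads:}                          # pull the lbads field out of the string
lbads=${lbads%% *}
echo "namespace uses LBA format $fmt, block size $(( 1 << lbads )) bytes"
```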
00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.987 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.988 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:00.989 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.989 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
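(Annotation for readers of this trace.) The repeated functions.sh@21-23 entries above and below are nvme_get walking the output of nvme id-ns line by line: with IFS set to ':' it reads each line into a field name and a value, skips empty values, and evals the pair into a per-namespace bash associative array (nvme2n1, nvme2n2, nvme2n3, and then the nvme3 controller via id-ctrl). A minimal standalone sketch of that idiom, assuming nvme-cli is installed and using hypothetical names (parse_id_ns, ns_info) rather than the SPDK helper itself:

#!/usr/bin/env bash
# Hypothetical sketch of the parsing idiom traced above, not the SPDK helper.
# Splits each "field : value" line from `nvme id-ns` on the first ':' and
# stores the trimmed pair in a bash associative array.

declare -A ns_info          # e.g. ns_info[nsze]=0x100000, ns_info[flbas]=0x4

parse_id_ns() {             # usage: parse_id_ns /dev/nvme2n1
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}              # collapse spaces in the field name, e.g. "lbaf  4" -> "lbaf4"
        val="${val#"${val%%[![:space:]]*}"}"  # trim leading whitespace from the value
        [[ -n $reg && -n $val ]] && ns_info[$reg]=$val
    done < <(nvme id-ns "$dev")
}

parse_id_ns /dev/nvme2n1
printf 'nsze=%s flbas=%s lbaf4=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}" "${ns_info[lbaf4]}"
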
00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:00.990 
11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:00.990 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:00.991 11:26:00 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:00.991 11:26:00 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:00.991 11:26:00 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:00.991 11:26:00 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:00.991 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:00.991 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 
11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.992 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 
11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.993 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:10:00.994 11:26:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
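The walk above is the FDP capability scan: get_ctrls_with_feature reads each controller's cached CTRATT (Controller Attributes) value and ctrl_has_fdp tests bit 19, the FDP Support bit, so controllers reporting only 0x8000 are passed over and the one reporting 0x88010 (nvme3, a little further down) is kept. A minimal standalone sketch of the same check, assuming nvme-cli is installed and that its id-ctrl output carries the ctratt field; the /dev/nvme3 device name is illustrative:

  # Sketch only: report whether an NVMe controller advertises FDP support.
  # CTRATT bit 19 is the FDP Support bit; 0x8000 does not have it set,
  # 0x88010 does.
  has_fdp() {
      local dev=$1 ctratt
      ctratt=$(nvme id-ctrl "$dev" | awk '/^ctratt/ {print $3}')
      (( ctratt & (1 << 19) ))
  }
  has_fdp /dev/nvme3 && echo "/dev/nvme3 supports FDP"

On this VM the FDP-capable controller is the one sitting at PCI address 0000:00:13.0.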
00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:10:00.994 11:26:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:10:00.995 11:26:00 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:10:00.995 11:26:00 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:10:00.995 11:26:00 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:10:00.995 11:26:00 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:01.561 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:01.818 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.818 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.818 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:01.818 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:10:02.076 11:26:01 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:02.076 11:26:01 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:02.076 11:26:01 
nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.076 11:26:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:02.077 ************************************ 00:10:02.077 START TEST nvme_flexible_data_placement 00:10:02.077 ************************************ 00:10:02.077 11:26:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:10:02.077 Initializing NVMe Controllers 00:10:02.077 Attaching to 0000:00:13.0 00:10:02.077 Controller supports FDP Attached to 0000:00:13.0 00:10:02.077 Namespace ID: 1 Endurance Group ID: 1 00:10:02.077 Initialization complete. 00:10:02.077 00:10:02.077 ================================== 00:10:02.077 == FDP tests for Namespace: #01 == 00:10:02.077 ================================== 00:10:02.077 00:10:02.077 Get Feature: FDP: 00:10:02.077 ================= 00:10:02.077 Enabled: Yes 00:10:02.077 FDP configuration Index: 0 00:10:02.077 00:10:02.077 FDP configurations log page 00:10:02.077 =========================== 00:10:02.077 Number of FDP configurations: 1 00:10:02.077 Version: 0 00:10:02.077 Size: 112 00:10:02.077 FDP Configuration Descriptor: 0 00:10:02.077 Descriptor Size: 96 00:10:02.077 Reclaim Group Identifier format: 2 00:10:02.077 FDP Volatile Write Cache: Not Present 00:10:02.077 FDP Configuration: Valid 00:10:02.077 Vendor Specific Size: 0 00:10:02.077 Number of Reclaim Groups: 2 00:10:02.077 Number of Reclaim Unit Handles: 8 00:10:02.077 Max Placement Identifiers: 128 00:10:02.077 Number of Namespaces Supported: 256 00:10:02.077 Reclaim unit Nominal Size: 6000000 bytes 00:10:02.077 Estimated Reclaim Unit Time Limit: Not Reported 00:10:02.077 RUH Desc #000: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #001: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #002: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #003: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #004: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #005: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #006: RUH Type: Initially Isolated 00:10:02.077 RUH Desc #007: RUH Type: Initially Isolated 00:10:02.077 00:10:02.077 FDP reclaim unit handle usage log page 00:10:02.077 ====================================== 00:10:02.077 Number of Reclaim Unit Handles: 8 00:10:02.077 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:10:02.077 RUH Usage Desc #001: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #002: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #003: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #004: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #005: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #006: RUH Attributes: Unused 00:10:02.077 RUH Usage Desc #007: RUH Attributes: Unused 00:10:02.077 00:10:02.077 FDP statistics log page 00:10:02.077 ======================= 00:10:02.077 Host bytes with metadata written: 1002917888 00:10:02.077 Media bytes with metadata written: 1003159552 00:10:02.077 Media bytes erased: 0 00:10:02.077 00:10:02.077 FDP Reclaim unit handle status 00:10:02.077 ============================== 00:10:02.077 Number of RUHS descriptors: 2 00:10:02.077 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x000000000000038b 00:10:02.077 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:10:02.077 00:10:02.077 FDP write on placement id: 0 success 00:10:02.077 00:10:02.077 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:10:02.077 00:10:02.077 IO mgmt send: RUH update for Placement ID: #0 Success 00:10:02.077 00:10:02.077 Get Feature: FDP Events for Placement handle: #0 00:10:02.077 ======================== 00:10:02.077 Number of FDP Events: 6 00:10:02.077 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:10:02.077 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:10:02.077 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:10:02.077 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:10:02.077 FDP Event: #4 Type: Media Reallocated Enabled: No 00:10:02.077 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:10:02.077 00:10:02.077 FDP events log page 00:10:02.077 =================== 00:10:02.077 Number of FDP events: 1 00:10:02.077 FDP Event #0: 00:10:02.077 Event Type: RU Not Written to Capacity 00:10:02.077 Placement Identifier: Valid 00:10:02.077 NSID: Valid 00:10:02.077 Location: Valid 00:10:02.077 Placement Identifier: 0 00:10:02.077 Event Timestamp: 5 00:10:02.077 Namespace Identifier: 1 00:10:02.077 Reclaim Group Identifier: 0 00:10:02.077 Reclaim Unit Handle Identifier: 0 00:10:02.077 00:10:02.077 FDP test passed 00:10:02.335 00:10:02.335 real 0m0.235s 00:10:02.335 user 0m0.073s 00:10:02.335 sys 0m0.060s 00:10:02.335 11:26:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.335 11:26:01 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:10:02.335 ************************************ 00:10:02.335 END TEST nvme_flexible_data_placement 00:10:02.335 ************************************ 00:10:02.335 00:10:02.335 real 0m7.322s 00:10:02.335 user 0m0.977s 00:10:02.335 sys 0m1.287s 00:10:02.335 11:26:01 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.335 11:26:01 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:10:02.335 ************************************ 00:10:02.335 END TEST nvme_fdp 00:10:02.335 ************************************ 00:10:02.335 11:26:01 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:10:02.335 11:26:01 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:02.335 11:26:01 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:02.335 11:26:01 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.335 11:26:01 -- common/autotest_common.sh@10 -- # set +x 00:10:02.335 ************************************ 00:10:02.335 START TEST nvme_rpc 00:10:02.335 ************************************ 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:10:02.335 * Looking for test storage... 
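The FDP unit that finished just above can also be run on its own, outside the full autotest pass; the binary and the PCIe address it targets are both visible in the trace. A sketch, with paths as laid out in this job's workspace (run as root, and substitute your own BDF on another machine):

  # Rebind the controllers to a userspace driver, then point the FDP test
  # binary at the FDP-capable controller selected earlier (0000:00:13.0 here).
  /home/vagrant/spdk_repo/spdk/scripts/setup.sh
  /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'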
00:10:02.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.335 11:26:01 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:02.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.335 --rc genhtml_branch_coverage=1 00:10:02.335 --rc genhtml_function_coverage=1 00:10:02.335 --rc genhtml_legend=1 00:10:02.335 --rc geninfo_all_blocks=1 00:10:02.335 --rc geninfo_unexecuted_blocks=1 00:10:02.335 00:10:02.335 ' 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:02.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.335 --rc genhtml_branch_coverage=1 00:10:02.335 --rc genhtml_function_coverage=1 00:10:02.335 --rc genhtml_legend=1 00:10:02.335 --rc geninfo_all_blocks=1 00:10:02.335 --rc geninfo_unexecuted_blocks=1 00:10:02.335 00:10:02.335 ' 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:10:02.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.335 --rc genhtml_branch_coverage=1 00:10:02.335 --rc genhtml_function_coverage=1 00:10:02.335 --rc genhtml_legend=1 00:10:02.335 --rc geninfo_all_blocks=1 00:10:02.335 --rc geninfo_unexecuted_blocks=1 00:10:02.335 00:10:02.335 ' 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:02.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.335 --rc genhtml_branch_coverage=1 00:10:02.335 --rc genhtml_function_coverage=1 00:10:02.335 --rc genhtml_legend=1 00:10:02.335 --rc geninfo_all_blocks=1 00:10:02.335 --rc geninfo_unexecuted_blocks=1 00:10:02.335 00:10:02.335 ' 00:10:02.335 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.335 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:02.335 11:26:01 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:02.593 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:10:02.593 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=65645 00:10:02.593 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:02.593 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:10:02.593 11:26:01 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 65645 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 65645 ']' 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:02.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:02.593 11:26:01 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.593 [2024-11-05 11:26:01.693267] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:10:02.593 [2024-11-05 11:26:01.693387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65645 ] 00:10:02.593 [2024-11-05 11:26:01.853327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.851 [2024-11-05 11:26:01.950544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.851 [2024-11-05 11:26:01.950617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.417 11:26:02 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:03.417 11:26:02 nvme_rpc -- common/autotest_common.sh@866 -- # return 0 00:10:03.417 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:10:03.706 Nvme0n1 00:10:03.706 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:10:03.706 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:10:03.964 request: 00:10:03.964 { 00:10:03.964 "bdev_name": "Nvme0n1", 00:10:03.964 "filename": "non_existing_file", 00:10:03.964 "method": "bdev_nvme_apply_firmware", 00:10:03.964 "req_id": 1 00:10:03.964 } 00:10:03.964 Got JSON-RPC error response 00:10:03.964 response: 00:10:03.964 { 00:10:03.964 "code": -32603, 00:10:03.964 "message": "open file failed." 00:10:03.964 } 00:10:03.964 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:03.964 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:03.964 11:26:02 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:03.964 11:26:03 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:03.964 11:26:03 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 65645 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 65645 ']' 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 65645 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@957 -- # uname 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65645 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65645' 00:10:03.964 killing process with pid 65645 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@971 -- # kill 65645 00:10:03.964 11:26:03 nvme_rpc -- common/autotest_common.sh@976 -- # wait 65645 00:10:05.338 00:10:05.338 real 0m3.049s 00:10:05.338 user 0m5.852s 00:10:05.338 sys 0m0.479s 00:10:05.338 11:26:04 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:05.338 11:26:04 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 ************************************ 00:10:05.338 END TEST nvme_rpc 00:10:05.338 ************************************ 00:10:05.338 11:26:04 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:05.338 11:26:04 -- common/autotest_common.sh@1103 -- # '[' 2 -le 
1 ']' 00:10:05.338 11:26:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:05.338 11:26:04 -- common/autotest_common.sh@10 -- # set +x 00:10:05.338 ************************************ 00:10:05.338 START TEST nvme_rpc_timeouts 00:10:05.338 ************************************ 00:10:05.338 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:05.338 * Looking for test storage... 00:10:05.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.338 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:05.338 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lcov --version 00:10:05.338 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:05.596 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.596 11:26:04 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:05.596 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.596 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.596 --rc genhtml_branch_coverage=1 00:10:05.596 --rc genhtml_function_coverage=1 00:10:05.596 --rc genhtml_legend=1 00:10:05.596 --rc geninfo_all_blocks=1 00:10:05.596 --rc geninfo_unexecuted_blocks=1 00:10:05.596 00:10:05.596 ' 00:10:05.596 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.596 --rc genhtml_branch_coverage=1 00:10:05.596 --rc genhtml_function_coverage=1 00:10:05.596 --rc genhtml_legend=1 00:10:05.596 --rc geninfo_all_blocks=1 00:10:05.596 --rc geninfo_unexecuted_blocks=1 00:10:05.596 00:10:05.596 ' 00:10:05.596 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:05.596 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.596 --rc genhtml_branch_coverage=1 00:10:05.596 --rc genhtml_function_coverage=1 00:10:05.596 --rc genhtml_legend=1 00:10:05.596 --rc geninfo_all_blocks=1 00:10:05.596 --rc geninfo_unexecuted_blocks=1 00:10:05.596 00:10:05.596 ' 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:05.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.597 --rc genhtml_branch_coverage=1 00:10:05.597 --rc genhtml_function_coverage=1 00:10:05.597 --rc genhtml_legend=1 00:10:05.597 --rc geninfo_all_blocks=1 00:10:05.597 --rc geninfo_unexecuted_blocks=1 00:10:05.597 00:10:05.597 ' 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_65710 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_65710 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=65742 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 65742 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 65742 ']' 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:05.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:05.597 11:26:04 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:05.597 11:26:04 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:05.597 [2024-11-05 11:26:04.728393] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:10:05.597 [2024-11-05 11:26:04.728511] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65742 ] 00:10:05.855 [2024-11-05 11:26:04.881849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:05.855 [2024-11-05 11:26:04.961281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:05.855 [2024-11-05 11:26:04.961445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.422 11:26:05 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:06.422 Checking default timeout settings: 00:10:06.422 11:26:05 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0 00:10:06.422 11:26:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:06.422 11:26:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:06.989 Making settings changes with rpc: 00:10:06.989 11:26:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:06.989 11:26:05 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:06.989 Check default vs. modified settings: 00:10:06.989 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:10:06.989 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_65710 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_65710 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:07.247 Setting action_on_timeout is changed as expected. 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_65710 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_65710 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:07.247 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:07.248 Setting timeout_us is changed as expected. 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
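Each of these checks reduces to diffing one field of save_config output captured before and after bdev_nvme_set_options. A hand-run sketch of the same verification, using the same RPCs as the trace but with simplified scratch file names:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  "$rpc" save_config > /tmp/settings_default
  "$rpc" bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
  "$rpc" save_config > /tmp/settings_modified
  for key in action_on_timeout timeout_us timeout_admin_us; do
      # expected changes: none -> abort, 0 -> 12000000, 0 -> 24000000
      diff <(grep "\"$key\"" /tmp/settings_default) <(grep "\"$key\"" /tmp/settings_modified)
  done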
00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_65710 00:10:07.248 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_65710 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:07.506 Setting timeout_admin_us is changed as expected. 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_65710 /tmp/settings_modified_65710 00:10:07.506 11:26:06 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 65742 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 65742 ']' 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 65742 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65742 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65742' 00:10:07.506 killing process with pid 65742 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 65742 00:10:07.506 11:26:06 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 65742 00:10:08.440 RPC TIMEOUT SETTING TEST PASSED. 00:10:08.440 11:26:07 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
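The teardown just traced is the killprocess helper from autotest_common.sh: confirm the PID is still alive, look at its command name, then kill it and wait so the exit status is reaped. A trimmed-down version of that pattern (the real helper additionally special-cases targets launched through sudo, which is why it inspects the comm field):

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0   # already gone
      echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
      kill "$pid"
      wait "$pid" 2>/dev/null                  # only reaps children of this shell
  }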
00:10:08.440 00:10:08.440 real 0m3.195s 00:10:08.440 user 0m6.377s 00:10:08.440 sys 0m0.453s 00:10:08.440 11:26:07 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:08.440 11:26:07 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:08.440 ************************************ 00:10:08.440 END TEST nvme_rpc_timeouts 00:10:08.440 ************************************ 00:10:08.698 11:26:07 -- spdk/autotest.sh@239 -- # uname -s 00:10:08.699 11:26:07 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:08.699 11:26:07 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:08.699 11:26:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:08.699 11:26:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:08.699 11:26:07 -- common/autotest_common.sh@10 -- # set +x 00:10:08.699 ************************************ 00:10:08.699 START TEST sw_hotplug 00:10:08.699 ************************************ 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:08.699 * Looking for test storage... 00:10:08.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.699 11:26:07 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:08.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.699 --rc genhtml_branch_coverage=1 00:10:08.699 --rc genhtml_function_coverage=1 00:10:08.699 --rc genhtml_legend=1 00:10:08.699 --rc geninfo_all_blocks=1 00:10:08.699 --rc geninfo_unexecuted_blocks=1 00:10:08.699 00:10:08.699 ' 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:08.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.699 --rc genhtml_branch_coverage=1 00:10:08.699 --rc genhtml_function_coverage=1 00:10:08.699 --rc genhtml_legend=1 00:10:08.699 --rc geninfo_all_blocks=1 00:10:08.699 --rc geninfo_unexecuted_blocks=1 00:10:08.699 00:10:08.699 ' 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:08.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.699 --rc genhtml_branch_coverage=1 00:10:08.699 --rc genhtml_function_coverage=1 00:10:08.699 --rc genhtml_legend=1 00:10:08.699 --rc geninfo_all_blocks=1 00:10:08.699 --rc geninfo_unexecuted_blocks=1 00:10:08.699 00:10:08.699 ' 00:10:08.699 11:26:07 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:08.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.699 --rc genhtml_branch_coverage=1 00:10:08.699 --rc genhtml_function_coverage=1 00:10:08.699 --rc genhtml_legend=1 00:10:08.699 --rc geninfo_all_blocks=1 00:10:08.699 --rc geninfo_unexecuted_blocks=1 00:10:08.699 00:10:08.699 ' 00:10:08.699 11:26:07 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:08.957 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.216 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.216 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.216 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.216 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:09.216 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:09.216 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:09.216 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
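The nvme_in_userspace expansion that follows walks PCI class codes through lspci -mm -n -D to find every NVM Express function before applying the PCI_ALLOWED filtering seen further down. In sysfs terms the selection is equivalent to the sketch below; class 0x010802 is mass storage, NVM subclass, NVM Express programming interface, and this is an equivalent view rather than the script's own pipeline:

  # List every PCI function whose class code marks it as an NVMe controller.
  for dev in /sys/bus/pci/devices/*; do
      [[ $(< "$dev/class") == 0x010802 ]] && basename "$dev"
  done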
00:10:09.216 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.216 11:26:08 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:09.216 11:26:08 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:09.216 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:09.217 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:09.217 11:26:08 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:09.475 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:09.733 Waiting for block devices as requested 00:10:09.733 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.733 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.733 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:09.733 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:15.001 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:15.001 11:26:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:15.001 11:26:14 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:15.259 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:15.259 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:15.259 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:15.518 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:15.776 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:15.776 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:15.776 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:15.776 11:26:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=66602 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:16.035 11:26:15 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:16.035 11:26:15 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:16.035 11:26:15 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:16.035 11:26:15 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:16.035 11:26:15 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:16.035 11:26:15 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:16.293 Initializing NVMe Controllers 00:10:16.293 Attaching to 0000:00:10.0 00:10:16.293 Attaching to 0000:00:11.0 00:10:16.293 Attached to 0000:00:10.0 00:10:16.293 Attached to 0000:00:11.0 00:10:16.293 Initialization complete. Starting I/O... 
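At this point run_hotplug has launched the hotplug example app (build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning) against the two allowed controllers, and remove_attach_helper starts three surprise-removal cycles: the "echo 1" writes below push each device out through PCI sysfs, the app reports the controller entering a failed state and aborts its outstanding commands, and the "echo uio_pci_generic" / "echo <bdf>" / "echo ''" writes bring it back under the userspace driver. A hedged sketch of one such cycle is shown here; the sysfs targets are assumptions based on the standard PCI hotplug knobs, not lifted from sw_hotplug.sh itself (run as root, bdf e.g. 0000:00:10.0):

remove_then_reattach_sketch() {
    local bdf=$1 driver=${2:-uio_pci_generic}
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"           # surprise removal
    sleep 6                                               # hotplug_wait in this test
    echo 1 > /sys/bus/pci/rescan                          # rediscover the device
    # if the kernel nvme driver grabbed it on rescan, detach it first
    if [[ -e /sys/bus/pci/devices/$bdf/driver ]]; then
        echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
    fi
    echo "$driver" > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe              # bind to the override driver
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"  # clear the override again
}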
00:10:16.293 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:16.293 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:16.293 00:10:17.228 QEMU NVMe Ctrl (12340 ): 2656 I/Os completed (+2656) 00:10:17.228 QEMU NVMe Ctrl (12341 ): 2656 I/Os completed (+2656) 00:10:17.228 00:10:18.163 QEMU NVMe Ctrl (12340 ): 6375 I/Os completed (+3719) 00:10:18.163 QEMU NVMe Ctrl (12341 ): 6369 I/Os completed (+3713) 00:10:18.163 00:10:19.097 QEMU NVMe Ctrl (12340 ): 10075 I/Os completed (+3700) 00:10:19.097 QEMU NVMe Ctrl (12341 ): 10048 I/Os completed (+3679) 00:10:19.097 00:10:20.068 QEMU NVMe Ctrl (12340 ): 13766 I/Os completed (+3691) 00:10:20.068 QEMU NVMe Ctrl (12341 ): 13733 I/Os completed (+3685) 00:10:20.068 00:10:21.442 QEMU NVMe Ctrl (12340 ): 17440 I/Os completed (+3674) 00:10:21.442 QEMU NVMe Ctrl (12341 ): 17420 I/Os completed (+3687) 00:10:21.442 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.009 [2024-11-05 11:26:21.127194] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:22.009 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:22.009 [2024-11-05 11:26:21.128162] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.128206] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.128221] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.128237] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:22.009 [2024-11-05 11:26:21.129776] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.129829] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.129842] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.129855] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/subsystem_device 00:10:22.009 EAL: Scan for (pci) bus failed. 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:22.009 [2024-11-05 11:26:21.150936] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:22.009 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:22.009 [2024-11-05 11:26:21.151767] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.151814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.151832] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.151846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:22.009 [2024-11-05 11:26:21.153165] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.153197] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.153211] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 [2024-11-05 11:26:21.153222] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:22.009 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:22.009 EAL: Scan for (pci) bus failed. 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.009 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:22.267 Attaching to 0000:00:10.0 00:10:22.267 Attached to 0000:00:10.0 00:10:22.267 QEMU NVMe Ctrl (12340 ): 40 I/Os completed (+40) 00:10:22.267 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:22.267 11:26:21 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:22.267 Attaching to 0000:00:11.0 00:10:22.267 Attached to 0000:00:11.0 00:10:23.201 QEMU NVMe Ctrl (12340 ): 3723 I/Os completed (+3683) 00:10:23.201 QEMU NVMe Ctrl (12341 ): 3426 I/Os completed (+3426) 00:10:23.201 00:10:24.134 QEMU NVMe Ctrl (12340 ): 7411 I/Os completed (+3688) 00:10:24.134 QEMU NVMe Ctrl (12341 ): 7101 I/Os completed (+3675) 00:10:24.134 00:10:25.067 QEMU NVMe Ctrl (12340 ): 11174 I/Os completed (+3763) 00:10:25.067 QEMU NVMe Ctrl (12341 ): 10881 I/Os completed (+3780) 00:10:25.067 00:10:26.440 QEMU NVMe Ctrl (12340 ): 14442 I/Os completed (+3268) 00:10:26.440 QEMU NVMe Ctrl (12341 ): 14149 I/Os completed (+3268) 00:10:26.440 00:10:27.374 QEMU NVMe Ctrl (12340 ): 18120 I/Os completed (+3678) 00:10:27.374 QEMU NVMe Ctrl (12341 ): 17800 I/Os completed (+3651) 00:10:27.374 00:10:28.307 QEMU NVMe Ctrl (12340 ): 21361 I/Os completed (+3241) 00:10:28.307 QEMU NVMe Ctrl (12341 ): 21040 I/Os completed (+3240) 00:10:28.307 00:10:29.242 QEMU NVMe Ctrl (12340 ): 24952 I/Os completed (+3591) 
00:10:29.242 QEMU NVMe Ctrl (12341 ): 24630 I/Os completed (+3590) 00:10:29.242 00:10:30.177 QEMU NVMe Ctrl (12340 ): 28647 I/Os completed (+3695) 00:10:30.177 QEMU NVMe Ctrl (12341 ): 28334 I/Os completed (+3704) 00:10:30.177 00:10:31.111 QEMU NVMe Ctrl (12340 ): 31922 I/Os completed (+3275) 00:10:31.111 QEMU NVMe Ctrl (12341 ): 31636 I/Os completed (+3302) 00:10:31.111 00:10:32.486 QEMU NVMe Ctrl (12340 ): 35351 I/Os completed (+3429) 00:10:32.486 QEMU NVMe Ctrl (12341 ): 35042 I/Os completed (+3406) 00:10:32.486 00:10:33.052 QEMU NVMe Ctrl (12340 ): 38541 I/Os completed (+3190) 00:10:33.052 QEMU NVMe Ctrl (12341 ): 38232 I/Os completed (+3190) 00:10:33.052 00:10:34.429 QEMU NVMe Ctrl (12340 ): 41868 I/Os completed (+3327) 00:10:34.429 QEMU NVMe Ctrl (12341 ): 41565 I/Os completed (+3333) 00:10:34.429 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.429 [2024-11-05 11:26:33.391938] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:34.429 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:34.429 [2024-11-05 11:26:33.392869] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.392913] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.392931] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.392946] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:34.429 [2024-11-05 11:26:33.394561] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.394605] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.394616] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.394628] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:34.429 [2024-11-05 11:26:33.409516] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:34.429 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:34.429 [2024-11-05 11:26:33.410399] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.410435] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.410453] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.410465] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:34.429 [2024-11-05 11:26:33.411817] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.411850] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.411862] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 [2024-11-05 11:26:33.411874] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:34.429 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:34.429 EAL: Scan for (pci) bus failed. 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:34.429 Attaching to 0000:00:10.0 00:10:34.429 Attached to 0000:00:10.0 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:34.429 11:26:33 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:34.429 Attaching to 0000:00:11.0 00:10:34.429 Attached to 0000:00:11.0 00:10:35.364 QEMU NVMe Ctrl (12340 ): 2727 I/Os completed (+2727) 00:10:35.364 QEMU NVMe Ctrl (12341 ): 2400 I/Os completed (+2400) 00:10:35.364 00:10:36.299 QEMU NVMe Ctrl (12340 ): 6346 I/Os completed (+3619) 00:10:36.299 QEMU NVMe Ctrl (12341 ): 6024 I/Os completed (+3624) 00:10:36.299 00:10:37.232 QEMU NVMe Ctrl (12340 ): 9951 I/Os completed (+3605) 00:10:37.232 QEMU NVMe Ctrl (12341 ): 9640 I/Os completed (+3616) 00:10:37.232 00:10:38.166 QEMU NVMe Ctrl (12340 ): 13585 I/Os completed (+3634) 00:10:38.166 QEMU NVMe Ctrl (12341 ): 13260 I/Os completed (+3620) 00:10:38.166 00:10:39.118 QEMU NVMe Ctrl (12340 ): 17188 I/Os completed (+3603) 00:10:39.118 QEMU NVMe Ctrl (12341 ): 16875 I/Os completed (+3615) 00:10:39.118 00:10:40.500 QEMU NVMe Ctrl (12340 ): 20814 I/Os completed (+3626) 00:10:40.500 QEMU NVMe Ctrl (12341 ): 20507 I/Os completed (+3632) 00:10:40.500 00:10:41.069 QEMU NVMe Ctrl (12340 ): 24252 I/Os completed (+3438) 00:10:41.069 QEMU NVMe Ctrl (12341 ): 23923 I/Os completed (+3416) 00:10:41.069 
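The interleaved "QEMU NVMe Ctrl (12340 )/(12341 ): N I/Os completed (+D)" lines are the hotplug example app's periodic progress counters for the two attached controllers (one sample per second in this run), so the +D delta is roughly the per-second I/O rate while both devices are present. If the log is saved to a file, a quick way to average those deltas is the one-liner below; the filename is only an example:

grep -o '([+][0-9]*)' sw_hotplug.log | tr -d '(+)' \
    | awk '{ sum += $1; n++ } END { if (n) printf "mean delta: %.0f I/Os per sample\n", sum / n }'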
00:10:42.444 QEMU NVMe Ctrl (12340 ): 27500 I/Os completed (+3248) 00:10:42.444 QEMU NVMe Ctrl (12341 ): 27167 I/Os completed (+3244) 00:10:42.444 00:10:43.378 QEMU NVMe Ctrl (12340 ): 30606 I/Os completed (+3106) 00:10:43.378 QEMU NVMe Ctrl (12341 ): 30290 I/Os completed (+3123) 00:10:43.378 00:10:44.312 QEMU NVMe Ctrl (12340 ): 33818 I/Os completed (+3212) 00:10:44.312 QEMU NVMe Ctrl (12341 ): 33502 I/Os completed (+3212) 00:10:44.312 00:10:45.245 QEMU NVMe Ctrl (12340 ): 37255 I/Os completed (+3437) 00:10:45.245 QEMU NVMe Ctrl (12341 ): 36943 I/Os completed (+3441) 00:10:45.245 00:10:46.178 QEMU NVMe Ctrl (12340 ): 40904 I/Os completed (+3649) 00:10:46.178 QEMU NVMe Ctrl (12341 ): 40597 I/Os completed (+3654) 00:10:46.178 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.436 [2024-11-05 11:26:45.666672] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:10:46.436 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:46.436 [2024-11-05 11:26:45.667674] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.667722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.667738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.667753] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:46.436 [2024-11-05 11:26:45.669439] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.669480] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.669492] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.669506] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:10:46.436 EAL: Scan for (pci) bus failed. 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:46.436 [2024-11-05 11:26:45.685489] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:46.436 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:46.436 [2024-11-05 11:26:45.686626] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.686667] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.686682] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.686695] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:46.436 [2024-11-05 11:26:45.688072] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.688105] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.688119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 [2024-11-05 11:26:45.688129] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:46.436 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:46.436 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:46.436 EAL: Scan for (pci) bus failed. 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:46.693 Attaching to 0000:00:10.0 00:10:46.693 Attached to 0000:00:10.0 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:46.693 11:26:45 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:46.693 Attaching to 0000:00:11.0 00:10:46.693 Attached to 0000:00:11.0 00:10:46.693 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:46.693 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:46.693 [2024-11-05 11:26:45.931793] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:58.928 11:26:57 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:58.928 11:26:57 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:58.928 11:26:57 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.80 00:10:58.928 11:26:57 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.80 00:10:58.928 11:26:57 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:58.928 11:26:57 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.80 00:10:58.928 11:26:57 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.80 2 00:10:58.928 remove_attach_helper took 42.80s to complete (handling 2 nvme drive(s)) 11:26:57 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 66602 00:11:05.528 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (66602) - No such process 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 66602 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67144 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67144 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 67144 ']' 00:11:05.528 11:27:03 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:05.528 11:27:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.528 [2024-11-05 11:27:04.017351] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
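With the standalone example finished, tgt_run_hotplug starts spdk_tgt (pid 67144 above), waits for its RPC socket, enables hotplug monitoring over RPC (bdev_nvme_set_hotplug -e, traced below), and reruns the same three remove/attach cycles with use_bdev=true: instead of sleeping a fixed hotplug_wait, the helper now polls the target and waits until the removed BDFs disappear from (and later reappear in) bdev_get_bdevs, using the bdev_bdfs jq filter visible further down. A hedged sketch of that polling loop follows; the jq path is taken from the trace, while the rpc.py invocation is an assumption standing in for the test's rpc_cmd wrapper:

bdev_bdfs_sketch() {
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u
}

# usage: wait_until_gone_sketch 0000:00:10.0 0000:00:11.0
wait_until_gone_sketch() {
    local still
    while still=$(comm -12 <(bdev_bdfs_sketch) <(printf '%s\n' "$@" | sort -u)); [[ -n $still ]]; do
        printf 'Still waiting for %s to be gone\n' $still
        sleep 0.5
    done
}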
00:11:05.528 [2024-11-05 11:27:04.017470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67144 ] 00:11:05.528 [2024-11-05 11:27:04.175408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:05.528 [2024-11-05 11:27:04.270394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:05.786 11:27:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:05.786 11:27:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.343 11:27:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.343 11:27:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.343 11:27:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:12.343 11:27:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:12.343 [2024-11-05 11:27:10.953002] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:11:12.343 [2024-11-05 11:27:10.954227] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:10.954261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:10.954274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:10.954291] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:10.954299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:10.954307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:10.954314] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:10.954322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:10.954328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:10.954340] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:10.954346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:10.954354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:11.352991] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:12.343 [2024-11-05 11:27:11.354284] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:11.354312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:11.354323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:11.354336] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:11.354345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:11.354352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:11.354361] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:11.354367] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:11.354375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 [2024-11-05 11:27:11.354382] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:12.343 [2024-11-05 11:27:11.354390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:12.343 [2024-11-05 11:27:11.354396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:12.343 11:27:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:12.343 11:27:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.343 11:27:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:12.343 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:12.603 11:27:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:24.818 11:27:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:24.818 11:27:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:24.818 [2024-11-05 11:27:23.853198] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:24.818 [2024-11-05 11:27:23.854395] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.818 [2024-11-05 11:27:23.854426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.818 [2024-11-05 11:27:23.854436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.818 [2024-11-05 11:27:23.854451] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.818 [2024-11-05 11:27:23.854459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.818 [2024-11-05 11:27:23.854467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.818 [2024-11-05 11:27:23.854474] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.818 [2024-11-05 11:27:23.854481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.818 [2024-11-05 11:27:23.854488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:24.818 [2024-11-05 11:27:23.854496] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:24.818 [2024-11-05 11:27:23.854503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:24.818 [2024-11-05 11:27:23.854510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.076 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.076 11:27:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.076 11:27:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.076 11:27:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.333 [2024-11-05 11:27:24.353202] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:25.333 [2024-11-05 11:27:24.354461] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.333 [2024-11-05 11:27:24.354492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.333 [2024-11-05 11:27:24.354505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.333 [2024-11-05 11:27:24.354518] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.333 [2024-11-05 11:27:24.354527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.333 [2024-11-05 11:27:24.354534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.333 [2024-11-05 11:27:24.354542] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.333 [2024-11-05 11:27:24.354549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.333 [2024-11-05 11:27:24.354557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.333 [2024-11-05 11:27:24.354564] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:25.333 [2024-11-05 11:27:24.354572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:25.333 [2024-11-05 11:27:24.354578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:25.333 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:25.333 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:25.593 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:25.593 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:25.855 11:27:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.855 11:27:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:25.855 11:27:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.855 11:27:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:25.855 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:25.855 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:25.855 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:25.855 11:27:25 
sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:25.855 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:26.115 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:26.115 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:26.115 11:27:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.347 11:27:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:38.347 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:38.347 [2024-11-05 11:27:37.253418] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:11:38.347 [2024-11-05 11:27:37.254644] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.347 [2024-11-05 11:27:37.254678] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.347 [2024-11-05 11:27:37.254689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.347 [2024-11-05 11:27:37.254704] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.347 [2024-11-05 11:27:37.254711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.347 [2024-11-05 11:27:37.254721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.347 [2024-11-05 11:27:37.254728] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.347 [2024-11-05 11:27:37.254736] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.347 [2024-11-05 11:27:37.254742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.347 [2024-11-05 11:27:37.254750] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.347 [2024-11-05 11:27:37.254757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.347 [2024-11-05 11:27:37.254764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.606 [2024-11-05 11:27:37.653415] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:38.606 [2024-11-05 11:27:37.654550] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.606 [2024-11-05 11:27:37.654580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.606 [2024-11-05 11:27:37.654591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.606 [2024-11-05 11:27:37.654604] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.606 [2024-11-05 11:27:37.654612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.606 [2024-11-05 11:27:37.654619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.606 [2024-11-05 11:27:37.654627] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.606 [2024-11-05 11:27:37.654634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.606 [2024-11-05 11:27:37.654643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.606 [2024-11-05 11:27:37.654649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:38.606 [2024-11-05 11:27:37.654657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:38.606 [2024-11-05 11:27:37.654663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:38.606 11:27:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.606 11:27:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:38.606 11:27:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.606 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:38.865 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:38.865 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.865 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:38.865 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:38.865 11:27:37 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:38.865 11:27:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:38.865 11:27:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:38.865 11:27:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.21 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.21 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.21 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.21 2 00:11:51.145 remove_attach_helper took 45.21s to complete (handling 2 nvme drive(s)) 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:51.145 11:27:50 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:51.145 11:27:50 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:51.145 11:27:50 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:57.751 [2024-11-05 11:27:56.190963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:57.751 [2024-11-05 11:27:56.191894] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.191922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.191932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.191947] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.191955] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.191963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.191970] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.191978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.191984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.191992] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.191998] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.192008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:57.751 11:27:56 
sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:57.751 11:27:56 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.751 [2024-11-05 11:27:56.690963] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:57.751 [2024-11-05 11:27:56.692072] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.692101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.692112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.692123] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.692131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.692138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.692147] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.692153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.692161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 [2024-11-05 11:27:56.692169] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:57.751 [2024-11-05 11:27:56.692176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:57.751 [2024-11-05 11:27:56.692183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:57.751 11:27:56 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:58.013 11:27:57 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.013 11:27:57 sw_hotplug -- 
common/autotest_common.sh@10 -- # set +x 00:11:58.013 11:27:57 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:58.013 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:58.275 11:27:57 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:10.507 11:28:09 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:10.507 11:28:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:10.507 [2024-11-05 11:28:09.591198] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
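The bdev_bdfs calls traced at sw_hotplug.sh@12-13 above are what produce the PCI address lists the test keeps comparing: rpc_cmd bdev_get_bdevs (autotest's thin wrapper around scripts/rpc.py) is filtered through jq for each NVMe bdev's pci_address and de-duplicated with sort -u. A minimal sketch of the helper as reconstructed from the trace (the rpc.py invocation is assumed; in the trace jq reads /dev/fd/63 because the RPC output arrives via process substitution, which the plain pipe below replaces):

    bdev_bdfs() {
        "$rootdir/scripts/rpc.py" bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    bdfs=($(bdev_bdfs))    # e.g. (0000:00:10.0 0000:00:11.0) while both controllers are attached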
00:12:10.507 [2024-11-05 11:28:09.592376] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.507 [2024-11-05 11:28:09.592403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.507 [2024-11-05 11:28:09.592414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.507 [2024-11-05 11:28:09.592431] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.507 [2024-11-05 11:28:09.592438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.507 [2024-11-05 11:28:09.592447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.507 [2024-11-05 11:28:09.592454] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.507 [2024-11-05 11:28:09.592462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.507 [2024-11-05 11:28:09.592468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:10.507 [2024-11-05 11:28:09.592476] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:10.507 [2024-11-05 11:28:09.592483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:10.507 [2024-11-05 11:28:09.592491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.074 11:28:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.074 11:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.074 11:28:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:12:11.074 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:11.074 [2024-11-05 11:28:10.291207] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
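The "(( 2 > 0 )) ... sleep 0.5" checks at sw_hotplug.sh@50-51, together with the 'Still waiting for %s to be gone' printf, form a simple poll: after the removals are issued the script re-reads bdev_bdfs every half second until no NVMe bdevs are left (the count drops from 2 to 1 to 0 in the trace). Roughly, as reconstructed from the traced line numbers:

    while bdfs=($(bdev_bdfs)) && (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
    done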
00:12:11.074 [2024-11-05 11:28:10.292131] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.074 [2024-11-05 11:28:10.292161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.074 [2024-11-05 11:28:10.292173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.074 [2024-11-05 11:28:10.292188] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.074 [2024-11-05 11:28:10.292198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.074 [2024-11-05 11:28:10.292205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.074 [2024-11-05 11:28:10.292213] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.074 [2024-11-05 11:28:10.292220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.074 [2024-11-05 11:28:10.292228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.074 [2024-11-05 11:28:10.292236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:11.074 [2024-11-05 11:28:10.292243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:11.074 [2024-11-05 11:28:10.292250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:11.641 11:28:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:11.641 11:28:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:11.641 11:28:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:11.641 11:28:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:23.837 11:28:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.837 [2024-11-05 11:28:22.991436] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
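The bare 'echo 1' at sw_hotplug.sh@39-40, issued once per device just before each controller drops into the failed state above, is the hot-remove itself. The xtrace only records the value being echoed, not the redirection, so the target below is an assumption based on the standard sysfs interface for surprise-removing a PCI function:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"    # assumed target of the 'echo 1' at @40
    done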
00:12:23.837 [2024-11-05 11:28:22.992649] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.837 [2024-11-05 11:28:22.992748] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.837 [2024-11-05 11:28:22.992839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.837 [2024-11-05 11:28:22.992909] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.837 [2024-11-05 11:28:22.992930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.837 [2024-11-05 11:28:22.992989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.837 [2024-11-05 11:28:22.993018] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.837 [2024-11-05 11:28:22.993038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.837 [2024-11-05 11:28:22.993122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.837 [2024-11-05 11:28:22.993151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:23.837 [2024-11-05 11:28:22.993195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:23.837 [2024-11-05 11:28:22.993223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:23.837 11:28:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:24.405 [2024-11-05 11:28:23.491443] nvme_ctrlr.c:1083:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
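The mirror-image sequence at sw_hotplug.sh@56-62 (seen above at 11:28:10 and again below at 11:28:23) brings the devices back: an 'echo 1' to trigger a PCI rescan, then per BDF the uio_pci_generic driver name, the address twice, and an empty string. Only the echoed values are visible in the trace, so the paths below are an assumed reconstruction using the kernel's driver_override mechanism rather than the script's literal redirections (the doubled address echo is probably an unbind plus a probe; a single drivers_probe write stands in for it here):

    echo 1 > /sys/bus/pci/rescan                                             # @56: re-enumerate the removed functions
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"   # @59
        echo "$dev" > /sys/bus/pci/drivers_probe                             # @60-61: bind the device to it
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"                # @62: clear the override again
    done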
00:12:24.405 [2024-11-05 11:28:23.492382] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.405 [2024-11-05 11:28:23.492474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.405 [2024-11-05 11:28:23.492536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.405 [2024-11-05 11:28:23.492591] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.405 [2024-11-05 11:28:23.492612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.405 [2024-11-05 11:28:23.492658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.405 [2024-11-05 11:28:23.492709] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.405 [2024-11-05 11:28:23.492726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.405 [2024-11-05 11:28:23.492779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.405 [2024-11-05 11:28:23.492818] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:24.405 [2024-11-05 11:28:23.492842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:24.405 [2024-11-05 11:28:23.492865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:24.405 11:28:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.405 11:28:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:24.405 11:28:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.405 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:24.666 11:28:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.71 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.71 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.71 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.71 2 00:12:36.891 remove_attach_helper took 45.71s to complete (handling 2 nvme drive(s)) 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:36.891 11:28:35 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67144 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 67144 ']' 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 67144 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67144 00:12:36.891 killing process with pid 67144 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67144' 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@971 -- # kill 67144 00:12:36.891 11:28:35 sw_hotplug -- common/autotest_common.sh@976 -- # wait 67144 00:12:37.833 11:28:37 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:38.094 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:38.355 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:38.355 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:38.619 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.619 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:38.619 00:12:38.619 real 2m30.019s 00:12:38.619 user 1m52.149s 00:12:38.619 sys 0m16.586s 00:12:38.619 
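The 'time=45.71 / echo 45.71 / return 0' trace above is timing_cmd measuring remove_attach_helper: TIMEFORMAT=%2R makes bash's time builtin print only elapsed seconds to two decimals, the helper runs with its own output redirected aside, and the timing line is kept as helper_time (45.21 s for the earlier batch of three hotplug cycles, 45.71 s for this one). The killprocess 67144 sequence a little further up is the usual teardown: confirm the pid is alive with kill -0, make sure it is not a sudo wrapper, then kill and wait. A stripped-down sketch of the timing idea (the real timing_cmd in autotest_common.sh also preserves the command's exit status and terminal output):

    TIMEFORMAT=%2R    # bash 'time' now prints just the elapsed seconds, two decimals
    helper_time=$({ time remove_attach_helper 3 6 true; } 2>&1 >/dev/null | tail -n1)
    printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))\n' "$helper_time" 2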
************************************ 00:12:38.619 END TEST sw_hotplug 00:12:38.619 ************************************ 00:12:38.619 11:28:37 sw_hotplug -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:38.619 11:28:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 11:28:37 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:38.619 11:28:37 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:38.619 11:28:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:38.619 11:28:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.619 11:28:37 -- common/autotest_common.sh@10 -- # set +x 00:12:38.619 ************************************ 00:12:38.619 START TEST nvme_xnvme 00:12:38.619 ************************************ 00:12:38.619 11:28:37 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:38.619 * Looking for test storage... 00:12:38.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:38.619 11:28:37 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:38.619 11:28:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:38.619 11:28:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:38.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.879 --rc genhtml_branch_coverage=1 00:12:38.879 --rc genhtml_function_coverage=1 00:12:38.879 --rc genhtml_legend=1 00:12:38.879 --rc geninfo_all_blocks=1 00:12:38.879 --rc geninfo_unexecuted_blocks=1 00:12:38.879 00:12:38.879 ' 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:38.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.879 --rc genhtml_branch_coverage=1 00:12:38.879 --rc genhtml_function_coverage=1 00:12:38.879 --rc genhtml_legend=1 00:12:38.879 --rc geninfo_all_blocks=1 00:12:38.879 --rc geninfo_unexecuted_blocks=1 00:12:38.879 00:12:38.879 ' 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:38.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.879 --rc genhtml_branch_coverage=1 00:12:38.879 --rc genhtml_function_coverage=1 00:12:38.879 --rc genhtml_legend=1 00:12:38.879 --rc geninfo_all_blocks=1 00:12:38.879 --rc geninfo_unexecuted_blocks=1 00:12:38.879 00:12:38.879 ' 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:38.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:38.879 --rc genhtml_branch_coverage=1 00:12:38.879 --rc genhtml_function_coverage=1 00:12:38.879 --rc genhtml_legend=1 00:12:38.879 --rc geninfo_all_blocks=1 00:12:38.879 --rc geninfo_unexecuted_blocks=1 00:12:38.879 00:12:38.879 ' 00:12:38.879 11:28:37 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:38.879 11:28:37 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:38.879 11:28:37 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.879 11:28:37 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.879 11:28:37 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.879 11:28:37 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:38.879 11:28:37 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:38.879 11:28:37 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:38.879 11:28:37 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:12:38.880 11:28:37 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:38.880 11:28:37 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:38.880 ************************************ 00:12:38.880 START TEST xnvme_to_malloc_dd_copy 00:12:38.880 ************************************ 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:38.880 11:28:37 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:38.880 11:28:37 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:38.880 { 00:12:38.880 "subsystems": [ 00:12:38.880 { 00:12:38.880 "subsystem": "bdev", 00:12:38.880 "config": [ 00:12:38.880 { 00:12:38.880 "params": { 00:12:38.880 "block_size": 512, 00:12:38.880 "num_blocks": 2097152, 00:12:38.880 "name": "malloc0" 00:12:38.880 }, 00:12:38.880 "method": "bdev_malloc_create" 00:12:38.880 }, 00:12:38.880 { 00:12:38.880 "params": { 00:12:38.880 "io_mechanism": "libaio", 00:12:38.880 "filename": "/dev/nullb0", 00:12:38.880 "name": "null0" 00:12:38.880 }, 00:12:38.880 "method": "bdev_xnvme_create" 00:12:38.880 }, 00:12:38.880 { 00:12:38.880 "method": "bdev_wait_for_examine" 00:12:38.880 } 00:12:38.880 ] 00:12:38.880 } 00:12:38.880 ] 00:12:38.880 } 00:12:38.880 [2024-11-05 11:28:38.035560] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
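The xnvme_to_malloc_dd_copy case starting here is driven entirely by spdk_dd: init_null_blk loads the null_blk module with gb=1 (so /dev/nullb0 is a 1 GiB scratch block device), gen_conf prints the JSON 'subsystems' blob shown above (a 1 GiB malloc bdev plus an xnvme bdev layered on /dev/nullb0), and that blob is handed to spdk_dd through process substitution, which is why the trace shows --json /dev/fd/62. The driver line at xnvme.sh@42, spelled out with the path and flags exactly as traced:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)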
00:12:38.880 [2024-11-05 11:28:38.035773] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68539 ] 00:12:39.138 [2024-11-05 11:28:38.196059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:39.138 [2024-11-05 11:28:38.290011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:41.039  [2024-11-05T11:28:41.247Z] Copying: 231/1024 [MB] (231 MBps) [2024-11-05T11:28:42.633Z] Copying: 521/1024 [MB] (290 MBps) [2024-11-05T11:28:42.892Z] Copying: 823/1024 [MB] (301 MBps) [2024-11-05T11:28:44.790Z] Copying: 1024/1024 [MB] (average 279 MBps) 00:12:45.516 00:12:45.516 11:28:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:45.516 11:28:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:45.516 11:28:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:45.516 11:28:44 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:45.775 { 00:12:45.775 "subsystems": [ 00:12:45.775 { 00:12:45.775 "subsystem": "bdev", 00:12:45.775 "config": [ 00:12:45.775 { 00:12:45.775 "params": { 00:12:45.775 "block_size": 512, 00:12:45.775 "num_blocks": 2097152, 00:12:45.775 "name": "malloc0" 00:12:45.775 }, 00:12:45.775 "method": "bdev_malloc_create" 00:12:45.775 }, 00:12:45.775 { 00:12:45.775 "params": { 00:12:45.775 "io_mechanism": "libaio", 00:12:45.775 "filename": "/dev/nullb0", 00:12:45.775 "name": "null0" 00:12:45.775 }, 00:12:45.775 "method": "bdev_xnvme_create" 00:12:45.775 }, 00:12:45.775 { 00:12:45.775 "method": "bdev_wait_for_examine" 00:12:45.775 } 00:12:45.775 ] 00:12:45.775 } 00:12:45.775 ] 00:12:45.775 } 00:12:45.775 [2024-11-05 11:28:44.814873] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
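The second pass at xnvme.sh@47 simply swaps the endpoints (--ib=null0 --ob=malloc0) to copy the same data back out of the null device. The sizes line up by construction: the malloc bdev is declared as 2097152 blocks of 512 bytes, which is exactly the gb=1 null_blk capacity and the 1024 MB total shown in the copy progress lines:

    echo $(( 2097152 * 512 ))    # 1073741824 bytes = 1 GiB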
00:12:45.775 [2024-11-05 11:28:44.814962] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68621 ] 00:12:45.775 [2024-11-05 11:28:44.964351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:45.775 [2024-11-05 11:28:45.040617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.676  [2024-11-05T11:28:47.882Z] Copying: 302/1024 [MB] (302 MBps) [2024-11-05T11:28:48.816Z] Copying: 605/1024 [MB] (303 MBps) [2024-11-05T11:28:49.380Z] Copying: 907/1024 [MB] (302 MBps) [2024-11-05T11:28:51.277Z] Copying: 1024/1024 [MB] (average 302 MBps) 00:12:52.003 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:52.003 11:28:51 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:52.003 { 00:12:52.003 "subsystems": [ 00:12:52.003 { 00:12:52.003 "subsystem": "bdev", 00:12:52.003 "config": [ 00:12:52.003 { 00:12:52.003 "params": { 00:12:52.003 "block_size": 512, 00:12:52.003 "num_blocks": 2097152, 00:12:52.003 "name": "malloc0" 00:12:52.003 }, 00:12:52.003 "method": "bdev_malloc_create" 00:12:52.003 }, 00:12:52.003 { 00:12:52.003 "params": { 00:12:52.003 "io_mechanism": "io_uring", 00:12:52.003 "filename": "/dev/nullb0", 00:12:52.003 "name": "null0" 00:12:52.003 }, 00:12:52.003 "method": "bdev_xnvme_create" 00:12:52.003 }, 00:12:52.003 { 00:12:52.003 "method": "bdev_wait_for_examine" 00:12:52.003 } 00:12:52.003 ] 00:12:52.003 } 00:12:52.003 ] 00:12:52.003 } 00:12:52.003 [2024-11-05 11:28:51.155256] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
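The four copy runs in this test differ only in the io_mechanism the xnvme bdev is created with: the loop at xnvme.sh@38-39 rewrites method_bdev_xnvme_create_0["io_mechanism"] and repeats both directions, first for libaio (above) and now for io_uring. Reconstructed from the traced assignments (the spdk_dd path is shortened here):

    xnvme_io=(libaio io_uring)
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        spdk_dd --ib=malloc0 --ob=null0 --json <(gen_conf)    # malloc -> null
        spdk_dd --ib=null0 --ob=malloc0 --json <(gen_conf)    # null -> malloc
    done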
00:12:52.003 [2024-11-05 11:28:51.155369] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68704 ] 00:12:52.262 [2024-11-05 11:28:51.311513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:52.262 [2024-11-05 11:28:51.387629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.161  [2024-11-05T11:28:54.368Z] Copying: 309/1024 [MB] (309 MBps) [2024-11-05T11:28:55.331Z] Copying: 618/1024 [MB] (309 MBps) [2024-11-05T11:28:55.589Z] Copying: 927/1024 [MB] (309 MBps) [2024-11-05T11:28:57.488Z] Copying: 1024/1024 [MB] (average 309 MBps) 00:12:58.214 00:12:58.214 11:28:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:58.214 11:28:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:58.214 11:28:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:58.214 11:28:57 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:58.214 { 00:12:58.214 "subsystems": [ 00:12:58.214 { 00:12:58.214 "subsystem": "bdev", 00:12:58.214 "config": [ 00:12:58.214 { 00:12:58.214 "params": { 00:12:58.214 "block_size": 512, 00:12:58.214 "num_blocks": 2097152, 00:12:58.214 "name": "malloc0" 00:12:58.214 }, 00:12:58.214 "method": "bdev_malloc_create" 00:12:58.214 }, 00:12:58.214 { 00:12:58.214 "params": { 00:12:58.214 "io_mechanism": "io_uring", 00:12:58.214 "filename": "/dev/nullb0", 00:12:58.214 "name": "null0" 00:12:58.214 }, 00:12:58.214 "method": "bdev_xnvme_create" 00:12:58.214 }, 00:12:58.214 { 00:12:58.214 "method": "bdev_wait_for_examine" 00:12:58.214 } 00:12:58.214 ] 00:12:58.214 } 00:12:58.214 ] 00:12:58.214 } 00:12:58.214 [2024-11-05 11:28:57.373820] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
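For what the summaries are worth on a null backend: the two libaio passes above averaged 279 and 302 MBps, the first io_uring pass averaged 309 MBps (the return pass below does slightly better still), so each 1024 MB pass finishes in a little over three seconds either way:

    echo "scale=1; 1024 / 309" | bc    # ~3.3 s per 1 GiB pass at the io_uring rate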
00:12:58.214 [2024-11-05 11:28:57.373932] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68780 ] 00:12:58.472 [2024-11-05 11:28:57.528413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.472 [2024-11-05 11:28:57.604260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.372  [2024-11-05T11:29:00.579Z] Copying: 318/1024 [MB] (318 MBps) [2024-11-05T11:29:01.513Z] Copying: 635/1024 [MB] (317 MBps) [2024-11-05T11:29:01.804Z] Copying: 954/1024 [MB] (318 MBps) [2024-11-05T11:29:03.704Z] Copying: 1024/1024 [MB] (average 318 MBps) 00:13:04.430 00:13:04.430 11:29:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:04.430 11:29:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:04.430 00:13:04.430 real 0m25.513s 00:13:04.430 user 0m22.528s 00:13:04.430 sys 0m2.491s 00:13:04.430 11:29:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.430 11:29:03 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:04.430 ************************************ 00:13:04.430 END TEST xnvme_to_malloc_dd_copy 00:13:04.430 ************************************ 00:13:04.430 11:29:03 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:04.430 11:29:03 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:04.430 11:29:03 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.430 11:29:03 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.430 ************************************ 00:13:04.430 START TEST xnvme_bdevperf 00:13:04.430 ************************************ 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:04.430 
11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.430 11:29:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.430 { 00:13:04.430 "subsystems": [ 00:13:04.430 { 00:13:04.430 "subsystem": "bdev", 00:13:04.430 "config": [ 00:13:04.430 { 00:13:04.430 "params": { 00:13:04.430 "io_mechanism": "libaio", 00:13:04.430 "filename": "/dev/nullb0", 00:13:04.430 "name": "null0" 00:13:04.430 }, 00:13:04.430 "method": "bdev_xnvme_create" 00:13:04.430 }, 00:13:04.430 { 00:13:04.430 "method": "bdev_wait_for_examine" 00:13:04.430 } 00:13:04.430 ] 00:13:04.430 } 00:13:04.430 ] 00:13:04.430 } 00:13:04.430 [2024-11-05 11:29:03.623136] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:04.430 [2024-11-05 11:29:03.623246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68879 ] 00:13:04.689 [2024-11-05 11:29:03.779866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.689 [2024-11-05 11:29:03.854739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.946 Running I/O for 5 seconds... 00:13:06.814 201472.00 IOPS, 787.00 MiB/s [2024-11-05T11:29:07.462Z] 201536.00 IOPS, 787.25 MiB/s [2024-11-05T11:29:08.396Z] 201642.67 IOPS, 787.67 MiB/s [2024-11-05T11:29:09.330Z] 201696.00 IOPS, 787.88 MiB/s 00:13:10.056 Latency(us) 00:13:10.056 [2024-11-05T11:29:09.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.056 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.056 null0 : 5.00 201687.69 787.84 0.00 0.00 315.05 109.49 1562.78 00:13:10.056 [2024-11-05T11:29:09.330Z] =================================================================================================================== 00:13:10.056 [2024-11-05T11:29:09.330Z] Total : 201687.69 787.84 0.00 0.00 315.05 109.49 1562.78 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.621 11:29:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.621 { 00:13:10.622 "subsystems": [ 00:13:10.622 { 00:13:10.622 "subsystem": "bdev", 00:13:10.622 "config": [ 00:13:10.622 { 00:13:10.622 "params": { 00:13:10.622 "io_mechanism": "io_uring", 00:13:10.622 "filename": "/dev/nullb0", 00:13:10.622 "name": "null0" 00:13:10.622 }, 00:13:10.622 "method": "bdev_xnvme_create" 00:13:10.622 }, 00:13:10.622 { 00:13:10.622 "method": 
"bdev_wait_for_examine" 00:13:10.622 } 00:13:10.622 ] 00:13:10.622 } 00:13:10.622 ] 00:13:10.622 } 00:13:10.622 [2024-11-05 11:29:09.692285] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:10.622 [2024-11-05 11:29:09.692396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68948 ] 00:13:10.622 [2024-11-05 11:29:09.845176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.879 [2024-11-05 11:29:09.924175] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.879 Running I/O for 5 seconds... 00:13:13.196 231168.00 IOPS, 903.00 MiB/s [2024-11-05T11:29:13.405Z] 231040.00 IOPS, 902.50 MiB/s [2024-11-05T11:29:14.340Z] 230997.33 IOPS, 902.33 MiB/s [2024-11-05T11:29:15.275Z] 231008.00 IOPS, 902.38 MiB/s 00:13:16.001 Latency(us) 00:13:16.001 [2024-11-05T11:29:15.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.001 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.001 null0 : 5.00 231002.02 902.35 0.00 0.00 274.78 146.51 1524.97 00:13:16.001 [2024-11-05T11:29:15.275Z] =================================================================================================================== 00:13:16.001 [2024-11-05T11:29:15.275Z] Total : 231002.02 902.35 0.00 0.00 274.78 146.51 1524.97 00:13:16.568 11:29:15 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:16.568 11:29:15 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:16.568 00:13:16.568 real 0m12.161s 00:13:16.568 user 0m9.818s 00:13:16.568 sys 0m2.105s 00:13:16.568 11:29:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.568 11:29:15 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:16.568 ************************************ 00:13:16.568 END TEST xnvme_bdevperf 00:13:16.568 ************************************ 00:13:16.568 00:13:16.568 real 0m37.917s 00:13:16.568 user 0m32.461s 00:13:16.568 sys 0m4.704s 00:13:16.568 11:29:15 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:16.568 11:29:15 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.568 ************************************ 00:13:16.568 END TEST nvme_xnvme 00:13:16.568 ************************************ 00:13:16.568 11:29:15 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:16.568 11:29:15 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:16.568 11:29:15 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:16.568 11:29:15 -- common/autotest_common.sh@10 -- # set +x 00:13:16.568 ************************************ 00:13:16.568 START TEST blockdev_xnvme 00:13:16.568 ************************************ 00:13:16.568 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:16.827 * Looking for test storage... 
00:13:16.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:16.827 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.828 11:29:15 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:16.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.828 --rc genhtml_branch_coverage=1 00:13:16.828 --rc genhtml_function_coverage=1 00:13:16.828 --rc genhtml_legend=1 00:13:16.828 --rc geninfo_all_blocks=1 00:13:16.828 --rc geninfo_unexecuted_blocks=1 00:13:16.828 00:13:16.828 ' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:16.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.828 --rc genhtml_branch_coverage=1 00:13:16.828 --rc genhtml_function_coverage=1 00:13:16.828 --rc genhtml_legend=1 
00:13:16.828 --rc geninfo_all_blocks=1 00:13:16.828 --rc geninfo_unexecuted_blocks=1 00:13:16.828 00:13:16.828 ' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:16.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.828 --rc genhtml_branch_coverage=1 00:13:16.828 --rc genhtml_function_coverage=1 00:13:16.828 --rc genhtml_legend=1 00:13:16.828 --rc geninfo_all_blocks=1 00:13:16.828 --rc geninfo_unexecuted_blocks=1 00:13:16.828 00:13:16.828 ' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:16.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.828 --rc genhtml_branch_coverage=1 00:13:16.828 --rc genhtml_function_coverage=1 00:13:16.828 --rc genhtml_legend=1 00:13:16.828 --rc geninfo_all_blocks=1 00:13:16.828 --rc geninfo_unexecuted_blocks=1 00:13:16.828 00:13:16.828 ' 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:16.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69090 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69090 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@833 -- # '[' -z 69090 ']' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:16.828 11:29:15 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:16.828 11:29:15 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:16.828 [2024-11-05 11:29:16.015531] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:16.828 [2024-11-05 11:29:16.015646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69090 ] 00:13:17.090 [2024-11-05 11:29:16.171729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.090 [2024-11-05 11:29:16.247837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:17.665 11:29:16 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:17.665 11:29:16 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:13:17.665 11:29:16 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:17.665 11:29:16 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:17.665 11:29:16 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:17.665 11:29:16 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:17.665 11:29:16 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:17.923 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.181 Waiting for block devices as requested 00:13:18.181 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.181 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.181 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.440 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:23.725 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:23.725 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:23.725 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:23.726 11:29:22 blockdev_xnvme -- 
common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- 
bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:23.726 nvme0n1 00:13:23.726 nvme1n1 00:13:23.726 nvme2n1 00:13:23.726 nvme2n2 00:13:23.726 nvme2n3 00:13:23.726 nvme3n1 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd 
bdev_get_bdevs 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:23.726 11:29:22 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:23.726 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:23.727 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a44e53f6-7ed6-4ba5-9534-3895b0041119"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a44e53f6-7ed6-4ba5-9534-3895b0041119",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fa4dd0e7-28ef-4ce1-8091-71c19c11cdbf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa4dd0e7-28ef-4ce1-8091-71c19c11cdbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "35389fda-b187-43e5-95c5-d6f55390b2a9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "35389fda-b187-43e5-95c5-d6f55390b2a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "cd66ad52-0e94-42e1-80da-33034d1e26fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd66ad52-0e94-42e1-80da-33034d1e26fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' 
"r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "e9985597-c766-4a93-b96b-d3cd91cfde7b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9985597-c766-4a93-b96b-d3cd91cfde7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9186a195-2bed-4fc3-9add-f9db95af011f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9186a195-2bed-4fc3-9add-f9db95af011f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:23.727 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:23.727 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:23.727 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:23.727 11:29:22 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69090 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@952 -- # '[' -z 69090 ']' 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 69090 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69090 00:13:23.727 killing process with pid 69090 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:23.727 11:29:22 blockdev_xnvme -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 69090' 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 69090 00:13:23.727 11:29:22 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 69090 00:13:24.666 11:29:23 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:24.666 11:29:23 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:24.666 11:29:23 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:13:24.666 11:29:23 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:24.666 11:29:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.666 ************************************ 00:13:24.666 START TEST bdev_hello_world 00:13:24.666 ************************************ 00:13:24.666 11:29:23 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:24.927 [2024-11-05 11:29:23.993761] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:24.927 [2024-11-05 11:29:23.993901] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69449 ] 00:13:24.927 [2024-11-05 11:29:24.152197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.187 [2024-11-05 11:29:24.238851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.447 [2024-11-05 11:29:24.541145] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:25.447 [2024-11-05 11:29:24.541349] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:25.447 [2024-11-05 11:29:24.541371] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:25.447 [2024-11-05 11:29:24.543275] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:25.447 [2024-11-05 11:29:24.543815] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:25.447 [2024-11-05 11:29:24.543835] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:25.447 [2024-11-05 11:29:24.544469] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
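The hello_world pass above boils down to two pieces, both visible earlier in this log. A sketch using the same binaries, paths, and arguments, with rpc.py standing in for the test's rpc_cmd helper (step 1 assumes the spdk_tgt instance from the setup phase is still listening on /var/tmp/spdk.sock; in the test itself the resulting bdev config was saved to test/bdev/bdev.json before that target was killed):

    # 1. during setup, each raw namespace is claimed as an xNVMe bdev over io_uring;
    #    the same call is repeated for all six /dev/nvme*n* nodes, e.g.:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring

    # 2. hello_bdev then replays the saved bdev config standalone and performs the
    #    open -> write -> read "Hello World!" cycle logged above ('' is the empty
    #    env_ctx argument the test passes through)
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 ''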
00:13:25.447 00:13:25.447 [2024-11-05 11:29:24.544546] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:26.384 ************************************ 00:13:26.384 00:13:26.384 real 0m1.364s 00:13:26.384 user 0m1.065s 00:13:26.384 sys 0m0.165s 00:13:26.384 11:29:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:26.384 11:29:25 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:26.384 END TEST bdev_hello_world 00:13:26.384 ************************************ 00:13:26.384 11:29:25 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:26.384 11:29:25 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:26.384 11:29:25 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:26.384 11:29:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:26.384 ************************************ 00:13:26.384 START TEST bdev_bounds 00:13:26.384 ************************************ 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:13:26.384 Process bdevio pid: 69480 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=69480 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 69480' 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 69480 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 69480 ']' 00:13:26.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:26.384 11:29:25 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:26.384 [2024-11-05 11:29:25.418322] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:26.384 [2024-11-05 11:29:25.418881] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69480 ] 00:13:26.384 [2024-11-05 11:29:25.578042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:26.642 [2024-11-05 11:29:25.676927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:26.642 [2024-11-05 11:29:25.677219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:26.642 [2024-11-05 11:29:25.677290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.207 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:27.207 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:13:27.207 11:29:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:27.207 I/O targets: 00:13:27.207 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:27.207 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:27.207 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:27.207 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:27.208 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:27.208 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:27.208 00:13:27.208 00:13:27.208 CUnit - A unit testing framework for C - Version 2.1-3 00:13:27.208 http://cunit.sourceforge.net/ 00:13:27.208 00:13:27.208 00:13:27.208 Suite: bdevio tests on: nvme3n1 00:13:27.208 Test: blockdev write read block ...passed 00:13:27.208 Test: blockdev write zeroes read block ...passed 00:13:27.208 Test: blockdev write zeroes read no split ...passed 00:13:27.208 Test: blockdev write zeroes read split ...passed 00:13:27.208 Test: blockdev write zeroes read split partial ...passed 00:13:27.208 Test: blockdev reset ...passed 00:13:27.208 Test: blockdev write read 8 blocks ...passed 00:13:27.208 Test: blockdev write read size > 128k ...passed 00:13:27.208 Test: blockdev write read invalid size ...passed 00:13:27.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.208 Test: blockdev write read max offset ...passed 00:13:27.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.208 Test: blockdev writev readv 8 blocks ...passed 00:13:27.208 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.208 Test: blockdev writev readv block ...passed 00:13:27.208 Test: blockdev writev readv size > 128k ...passed 00:13:27.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.208 Test: blockdev comparev and writev ...passed 00:13:27.208 Test: blockdev nvme passthru rw ...passed 00:13:27.208 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.208 Test: blockdev nvme admin passthru ...passed 00:13:27.208 Test: blockdev copy ...passed 00:13:27.208 Suite: bdevio tests on: nvme2n3 00:13:27.208 Test: blockdev write read block ...passed 00:13:27.208 Test: blockdev write zeroes read block ...passed 00:13:27.208 Test: blockdev write zeroes read no split ...passed 00:13:27.208 Test: blockdev write zeroes read split ...passed 00:13:27.208 Test: blockdev write zeroes read split partial ...passed 00:13:27.208 Test: blockdev reset ...passed 
00:13:27.208 Test: blockdev write read 8 blocks ...passed 00:13:27.208 Test: blockdev write read size > 128k ...passed 00:13:27.208 Test: blockdev write read invalid size ...passed 00:13:27.208 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.208 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.208 Test: blockdev write read max offset ...passed 00:13:27.208 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.208 Test: blockdev writev readv 8 blocks ...passed 00:13:27.208 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.208 Test: blockdev writev readv block ...passed 00:13:27.208 Test: blockdev writev readv size > 128k ...passed 00:13:27.208 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.466 Test: blockdev comparev and writev ...passed 00:13:27.466 Test: blockdev nvme passthru rw ...passed 00:13:27.466 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.466 Test: blockdev nvme admin passthru ...passed 00:13:27.466 Test: blockdev copy ...passed 00:13:27.466 Suite: bdevio tests on: nvme2n2 00:13:27.466 Test: blockdev write read block ...passed 00:13:27.466 Test: blockdev write zeroes read block ...passed 00:13:27.466 Test: blockdev write zeroes read no split ...passed 00:13:27.466 Test: blockdev write zeroes read split ...passed 00:13:27.466 Test: blockdev write zeroes read split partial ...passed 00:13:27.466 Test: blockdev reset ...passed 00:13:27.466 Test: blockdev write read 8 blocks ...passed 00:13:27.466 Test: blockdev write read size > 128k ...passed 00:13:27.466 Test: blockdev write read invalid size ...passed 00:13:27.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.466 Test: blockdev write read max offset ...passed 00:13:27.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.466 Test: blockdev writev readv 8 blocks ...passed 00:13:27.466 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.466 Test: blockdev writev readv block ...passed 00:13:27.466 Test: blockdev writev readv size > 128k ...passed 00:13:27.466 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.466 Test: blockdev comparev and writev ...passed 00:13:27.466 Test: blockdev nvme passthru rw ...passed 00:13:27.466 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.466 Test: blockdev nvme admin passthru ...passed 00:13:27.466 Test: blockdev copy ...passed 00:13:27.466 Suite: bdevio tests on: nvme2n1 00:13:27.466 Test: blockdev write read block ...passed 00:13:27.466 Test: blockdev write zeroes read block ...passed 00:13:27.466 Test: blockdev write zeroes read no split ...passed 00:13:27.466 Test: blockdev write zeroes read split ...passed 00:13:27.466 Test: blockdev write zeroes read split partial ...passed 00:13:27.466 Test: blockdev reset ...passed 00:13:27.466 Test: blockdev write read 8 blocks ...passed 00:13:27.466 Test: blockdev write read size > 128k ...passed 00:13:27.466 Test: blockdev write read invalid size ...passed 00:13:27.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.466 Test: blockdev write read max offset ...passed 00:13:27.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.466 Test: blockdev writev readv 8 blocks 
...passed 00:13:27.466 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.466 Test: blockdev writev readv block ...passed 00:13:27.466 Test: blockdev writev readv size > 128k ...passed 00:13:27.466 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.466 Test: blockdev comparev and writev ...passed 00:13:27.466 Test: blockdev nvme passthru rw ...passed 00:13:27.466 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.466 Test: blockdev nvme admin passthru ...passed 00:13:27.466 Test: blockdev copy ...passed 00:13:27.466 Suite: bdevio tests on: nvme1n1 00:13:27.466 Test: blockdev write read block ...passed 00:13:27.466 Test: blockdev write zeroes read block ...passed 00:13:27.466 Test: blockdev write zeroes read no split ...passed 00:13:27.466 Test: blockdev write zeroes read split ...passed 00:13:27.466 Test: blockdev write zeroes read split partial ...passed 00:13:27.466 Test: blockdev reset ...passed 00:13:27.466 Test: blockdev write read 8 blocks ...passed 00:13:27.466 Test: blockdev write read size > 128k ...passed 00:13:27.466 Test: blockdev write read invalid size ...passed 00:13:27.466 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.466 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.466 Test: blockdev write read max offset ...passed 00:13:27.466 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.466 Test: blockdev writev readv 8 blocks ...passed 00:13:27.466 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.467 Test: blockdev writev readv block ...passed 00:13:27.467 Test: blockdev writev readv size > 128k ...passed 00:13:27.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.467 Test: blockdev comparev and writev ...passed 00:13:27.467 Test: blockdev nvme passthru rw ...passed 00:13:27.467 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.467 Test: blockdev nvme admin passthru ...passed 00:13:27.467 Test: blockdev copy ...passed 00:13:27.467 Suite: bdevio tests on: nvme0n1 00:13:27.467 Test: blockdev write read block ...passed 00:13:27.467 Test: blockdev write zeroes read block ...passed 00:13:27.467 Test: blockdev write zeroes read no split ...passed 00:13:27.467 Test: blockdev write zeroes read split ...passed 00:13:27.725 Test: blockdev write zeroes read split partial ...passed 00:13:27.725 Test: blockdev reset ...passed 00:13:27.725 Test: blockdev write read 8 blocks ...passed 00:13:27.725 Test: blockdev write read size > 128k ...passed 00:13:27.725 Test: blockdev write read invalid size ...passed 00:13:27.725 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:27.725 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:27.725 Test: blockdev write read max offset ...passed 00:13:27.725 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:27.725 Test: blockdev writev readv 8 blocks ...passed 00:13:27.725 Test: blockdev writev readv 30 x 1block ...passed 00:13:27.725 Test: blockdev writev readv block ...passed 00:13:27.725 Test: blockdev writev readv size > 128k ...passed 00:13:27.725 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:27.725 Test: blockdev comparev and writev ...passed 00:13:27.725 Test: blockdev nvme passthru rw ...passed 00:13:27.725 Test: blockdev nvme passthru vendor specific ...passed 00:13:27.725 Test: blockdev nvme admin passthru ...passed 00:13:27.725 Test: blockdev copy ...passed 
00:13:27.725 00:13:27.725 Run Summary: Type Total Ran Passed Failed Inactive 00:13:27.725 suites 6 6 n/a 0 0 00:13:27.725 tests 138 138 138 0 0 00:13:27.725 asserts 780 780 780 0 n/a 00:13:27.725 00:13:27.725 Elapsed time = 1.077 seconds 00:13:27.725 0 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 69480 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 69480 ']' 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 69480 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69480 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69480' 00:13:27.725 killing process with pid 69480 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 69480 00:13:27.725 11:29:26 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 69480 00:13:28.291 11:29:27 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:13:28.291 00:13:28.291 real 0m2.139s 00:13:28.291 user 0m5.308s 00:13:28.291 sys 0m0.288s 00:13:28.291 11:29:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:28.291 11:29:27 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:28.291 ************************************ 00:13:28.291 END TEST bdev_bounds 00:13:28.291 ************************************ 00:13:28.291 11:29:27 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:28.291 11:29:27 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:13:28.291 11:29:27 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:28.291 11:29:27 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:28.291 ************************************ 00:13:28.291 START TEST bdev_nbd 00:13:28.291 ************************************ 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=69538 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 69538 /var/tmp/spdk-nbd.sock 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 69538 ']' 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:13:28.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:13:28.291 11:29:27 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:28.549 [2024-11-05 11:29:27.608827] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:13:28.549 [2024-11-05 11:29:27.608941] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.549 [2024-11-05 11:29:27.768830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:28.807 [2024-11-05 11:29:27.865723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:29.396 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.654 
1+0 records in 00:13:29.654 1+0 records out 00:13:29.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00062977 s, 6.5 MB/s 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.654 1+0 records in 00:13:29.654 1+0 records out 00:13:29.654 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647577 s, 6.3 MB/s 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:29.654 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.912 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.912 11:29:28 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:29.912 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.912 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:29.912 11:29:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:29.912 11:29:29 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:29.912 1+0 records in 00:13:29.912 1+0 records out 00:13:29.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000472633 s, 8.7 MB/s 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:29.912 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:29.913 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:29.913 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:29.913 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:29.913 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:29.913 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.171 1+0 records in 00:13:30.171 1+0 records out 00:13:30.171 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000482171 s, 8.5 MB/s 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.171 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.429 1+0 records in 00:13:30.429 1+0 records out 00:13:30.429 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010011 s, 4.1 MB/s 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.429 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:13:30.686 11:29:29 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.686 1+0 records in 00:13:30.686 1+0 records out 00:13:30.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909746 s, 4.5 MB/s 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.686 11:29:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd0", 00:13:30.944 "bdev_name": "nvme0n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd1", 00:13:30.944 "bdev_name": "nvme1n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd2", 00:13:30.944 "bdev_name": "nvme2n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd3", 00:13:30.944 "bdev_name": "nvme2n2" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd4", 00:13:30.944 "bdev_name": "nvme2n3" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd5", 00:13:30.944 "bdev_name": "nvme3n1" 00:13:30.944 } 00:13:30.944 ]' 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd0", 00:13:30.944 "bdev_name": "nvme0n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd1", 00:13:30.944 "bdev_name": "nvme1n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd2", 00:13:30.944 "bdev_name": "nvme2n1" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd3", 00:13:30.944 "bdev_name": "nvme2n2" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": "/dev/nbd4", 00:13:30.944 "bdev_name": "nvme2n3" 00:13:30.944 }, 00:13:30.944 { 00:13:30.944 "nbd_device": 
"/dev/nbd5", 00:13:30.944 "bdev_name": "nvme3n1" 00:13:30.944 } 00:13:30.944 ]' 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:30.944 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.202 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:31.460 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.718 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.719 11:29:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:31.976 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:31.977 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.234 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.491 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.492 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:32.749 /dev/nbd0 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:32.749 1+0 records in 00:13:32.749 1+0 records out 00:13:32.749 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000943011 s, 4.3 MB/s 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:32.749 11:29:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:33.008 /dev/nbd1 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.008 1+0 records in 00:13:33.008 1+0 records out 00:13:33.008 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00101548 s, 4.0 MB/s 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.008 11:29:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.008 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:33.266 /dev/nbd10 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.266 1+0 records in 00:13:33.266 1+0 records out 00:13:33.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000861466 s, 4.8 MB/s 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.266 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:33.524 /dev/nbd11 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.524 11:29:32 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.524 1+0 records in 00:13:33.524 1+0 records out 00:13:33.524 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000974779 s, 4.2 MB/s 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.524 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:33.783 /dev/nbd12 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.783 1+0 records in 00:13:33.783 1+0 records out 00:13:33.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000960788 s, 4.3 MB/s 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.783 11:29:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:33.783 /dev/nbd13 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:13:33.783 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.042 1+0 records in 00:13:34.042 1+0 records out 00:13:34.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000941776 s, 4.3 MB/s 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd0", 00:13:34.042 "bdev_name": "nvme0n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd1", 00:13:34.042 "bdev_name": "nvme1n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd10", 00:13:34.042 "bdev_name": "nvme2n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd11", 00:13:34.042 "bdev_name": "nvme2n2" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd12", 00:13:34.042 "bdev_name": "nvme2n3" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd13", 00:13:34.042 "bdev_name": "nvme3n1" 00:13:34.042 } 00:13:34.042 ]' 00:13:34.042 11:29:33 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd0", 00:13:34.042 "bdev_name": "nvme0n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd1", 00:13:34.042 "bdev_name": "nvme1n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd10", 00:13:34.042 "bdev_name": "nvme2n1" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd11", 00:13:34.042 "bdev_name": "nvme2n2" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd12", 00:13:34.042 "bdev_name": "nvme2n3" 00:13:34.042 }, 00:13:34.042 { 00:13:34.042 "nbd_device": "/dev/nbd13", 00:13:34.042 "bdev_name": "nvme3n1" 00:13:34.042 } 00:13:34.042 ]' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:34.042 /dev/nbd1 00:13:34.042 /dev/nbd10 00:13:34.042 /dev/nbd11 00:13:34.042 /dev/nbd12 00:13:34.042 /dev/nbd13' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:34.042 /dev/nbd1 00:13:34.042 /dev/nbd10 00:13:34.042 /dev/nbd11 00:13:34.042 /dev/nbd12 00:13:34.042 /dev/nbd13' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:34.042 256+0 records in 00:13:34.042 256+0 records out 00:13:34.042 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00469352 s, 223 MB/s 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:34.042 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:34.300 256+0 records in 00:13:34.300 256+0 records out 00:13:34.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189568 s, 5.5 MB/s 00:13:34.300 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:34.300 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:34.558 256+0 records in 00:13:34.558 256+0 records out 00:13:34.558 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.257705 s, 4.1 MB/s 00:13:34.558 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:34.558 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:34.816 256+0 records in 00:13:34.816 256+0 records out 00:13:34.816 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.186727 s, 5.6 MB/s 00:13:34.816 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:34.816 11:29:33 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:35.075 256+0 records in 00:13:35.075 256+0 records out 00:13:35.075 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210299 s, 5.0 MB/s 00:13:35.075 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:35.075 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:35.333 256+0 records in 00:13:35.333 256+0 records out 00:13:35.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.21263 s, 4.9 MB/s 00:13:35.333 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:35.333 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:35.333 256+0 records in 00:13:35.333 256+0 records out 00:13:35.333 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.189126 s, 5.5 MB/s 00:13:35.333 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.334 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 
/dev/nbd11 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.592 11:29:34 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:35.850 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.108 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.369 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 
-- # (( i <= 20 )) 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.633 11:29:35 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:36.899 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:36.899 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:36.900 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:37.160 malloc_lvol_verify 00:13:37.160 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:37.421 0dada390-0eb2-4f19-928e-b6a6e895514b 00:13:37.421 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:37.682 1d344868-f524-43aa-a17d-e4261a6c1248 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:37.682 /dev/nbd0 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:37.682 mke2fs 1.47.0 (5-Feb-2023) 00:13:37.682 
Discarding device blocks: 0/4096 done 00:13:37.682 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:37.682 00:13:37.682 Allocating group tables: 0/1 done 00:13:37.682 Writing inode tables: 0/1 done 00:13:37.682 Creating journal (1024 blocks): done 00:13:37.682 Writing superblocks and filesystem accounting information: 0/1 done 00:13:37.682 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.682 11:29:36 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 69538 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 69538 ']' 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 69538 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 69538 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:37.941 killing process with pid 69538 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 69538' 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 69538 00:13:37.941 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 69538 00:13:38.509 ************************************ 00:13:38.509 END TEST bdev_nbd 00:13:38.509 ************************************ 00:13:38.510 11:29:37 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:38.510 00:13:38.510 real 0m10.209s 00:13:38.510 user 0m13.903s 00:13:38.510 sys 0m3.495s 00:13:38.510 11:29:37 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:38.510 11:29:37 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@10 -- # set +x 00:13:38.770 11:29:37 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:38.770 11:29:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:38.770 11:29:37 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:38.770 11:29:37 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:38.770 11:29:37 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:38.770 11:29:37 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.770 11:29:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:38.770 ************************************ 00:13:38.770 START TEST bdev_fio 00:13:38.770 ************************************ 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:13:38.770 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # echo serialize_overlap=1 00:13:38.770 
11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:38.770 ************************************ 00:13:38.770 START TEST bdev_fio_rw_verify 00:13:38.770 ************************************ 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:13:38.770 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:38.771 11:29:37 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:39.031 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:39.031 fio-3.35 00:13:39.031 Starting 6 threads 00:13:51.240 00:13:51.240 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=69939: Tue Nov 5 11:29:48 2024 00:13:51.240 read: IOPS=31.0k, BW=121MiB/s (127MB/s)(1211MiB/10003msec) 00:13:51.240 slat (usec): min=2, max=2511, avg= 4.59, stdev=10.29 00:13:51.240 clat (usec): min=52, max=7628, avg=522.50, 
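(A minimal sketch of reproducing this fio run by hand, assuming the repo and fio paths used by this particular job; the flags below are copied from the traced command above.)

  # Preload ASan together with the SPDK fio bdev plugin, as the test harness does,
  # so the sanitizer runtime is loaded before the plugin resolves its symbols.
  LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --verify_state_save=0 \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output

Each [job_nvmeXnY] section generated into bdev.fio above maps one fio job onto the xNVMe bdev of the same name, which is why fio starts six threads here.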
stdev=521.53 00:13:51.240 lat (usec): min=56, max=7641, avg=527.09, stdev=522.22 00:13:51.240 clat percentiles (usec): 00:13:51.240 | 50.000th=[ 351], 99.000th=[ 2671], 99.900th=[ 3949], 99.990th=[ 5211], 00:13:51.240 | 99.999th=[ 7570] 00:13:51.240 write: IOPS=31.1k, BW=122MiB/s (128MB/s)(1217MiB/10003msec); 0 zone resets 00:13:51.240 slat (usec): min=3, max=6708, avg=25.71, stdev=80.61 00:13:51.240 clat (usec): min=55, max=94216, avg=797.61, stdev=1123.03 00:13:51.240 lat (usec): min=80, max=94232, avg=823.32, stdev=1130.68 00:13:51.240 clat percentiles (usec): 00:13:51.240 | 50.000th=[ 478], 99.000th=[ 4293], 99.900th=[ 7701], 99.990th=[32375], 00:13:51.240 | 99.999th=[93848] 00:13:51.240 bw ( KiB/s): min=51672, max=209472, per=100.00%, avg=125189.37, stdev=7471.85, samples=114 00:13:51.240 iops : min=12916, max=52368, avg=31296.68, stdev=1868.02, samples=114 00:13:51.240 lat (usec) : 100=0.12%, 250=19.93%, 500=41.70%, 750=16.03%, 1000=6.01% 00:13:51.240 lat (msec) : 2=10.34%, 4=5.20%, 10=0.65%, 20=0.01%, 50=0.01% 00:13:51.240 lat (msec) : 100=0.01% 00:13:51.240 cpu : usr=48.71%, sys=29.80%, ctx=6175, majf=0, minf=25948 00:13:51.240 IO depths : 1=11.2%, 2=23.3%, 4=51.0%, 8=14.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:51.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.240 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:51.240 issued rwts: total=310001,311494,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:51.240 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:51.240 00:13:51.240 Run status group 0 (all jobs): 00:13:51.240 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=1211MiB (1270MB), run=10003-10003msec 00:13:51.240 WRITE: bw=122MiB/s (128MB/s), 122MiB/s-122MiB/s (128MB/s-128MB/s), io=1217MiB (1276MB), run=10003-10003msec 00:13:51.240 ----------------------------------------------------- 00:13:51.240 Suppressions used: 00:13:51.240 count bytes template 00:13:51.240 6 48 /usr/src/fio/parse.c 00:13:51.240 1254 120384 /usr/src/fio/iolog.c 00:13:51.240 1 8 libtcmalloc_minimal.so 00:13:51.240 1 904 libcrypto.so 00:13:51.241 ----------------------------------------------------- 00:13:51.241 00:13:51.241 00:13:51.241 real 0m11.798s 00:13:51.241 user 0m30.606s 00:13:51.241 sys 0m18.179s 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.241 ************************************ 00:13:51.241 END TEST bdev_fio_rw_verify 00:13:51.241 ************************************ 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio 
-- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "a44e53f6-7ed6-4ba5-9534-3895b0041119"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a44e53f6-7ed6-4ba5-9534-3895b0041119",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "fa4dd0e7-28ef-4ce1-8091-71c19c11cdbf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fa4dd0e7-28ef-4ce1-8091-71c19c11cdbf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "35389fda-b187-43e5-95c5-d6f55390b2a9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "35389fda-b187-43e5-95c5-d6f55390b2a9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' 
' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "cd66ad52-0e94-42e1-80da-33034d1e26fc"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cd66ad52-0e94-42e1-80da-33034d1e26fc",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "e9985597-c766-4a93-b96b-d3cd91cfde7b"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e9985597-c766-4a93-b96b-d3cd91cfde7b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "9186a195-2bed-4fc3-9add-f9db95af011f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "9186a195-2bed-4fc3-9add-f9db95af011f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:51.241 /home/vagrant/spdk_repo/spdk 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:51.241 ************************************ 
00:13:51.241 END TEST bdev_fio 00:13:51.241 ************************************ 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:51.241 00:13:51.241 real 0m11.953s 00:13:51.241 user 0m30.675s 00:13:51.241 sys 0m18.248s 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:51.241 11:29:49 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 11:29:49 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:51.241 11:29:49 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:51.241 11:29:49 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:51.241 11:29:49 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:51.241 11:29:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 ************************************ 00:13:51.241 START TEST bdev_verify 00:13:51.241 ************************************ 00:13:51.241 11:29:49 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:51.241 [2024-11-05 11:29:49.878655] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:51.241 [2024-11-05 11:29:49.878772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70108 ] 00:13:51.241 [2024-11-05 11:29:50.039266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.241 [2024-11-05 11:29:50.141031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.241 [2024-11-05 11:29:50.141125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:51.241 Running I/O for 5 seconds... 
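(A minimal sketch of the equivalent standalone bdevperf invocation, using the paths and flags traced above.)

  # 128-deep queue, 4096-byte I/Os, verify workload for 5 seconds on core mask 0x3 (cores 0-1);
  # -C is passed through exactly as the test script passes it.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3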
00:13:53.550 25952.00 IOPS, 101.38 MiB/s [2024-11-05T11:29:53.757Z] 24960.00 IOPS, 97.50 MiB/s [2024-11-05T11:29:55.130Z] 24896.00 IOPS, 97.25 MiB/s [2024-11-05T11:29:55.697Z] 24896.00 IOPS, 97.25 MiB/s 00:13:56.423 Latency(us) 00:13:56.423 [2024-11-05T11:29:55.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.423 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0xa0000 00:13:56.423 nvme0n1 : 5.05 1775.95 6.94 0.00 0.00 71940.18 10082.46 62511.26 00:13:56.423 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0xa0000 length 0xa0000 00:13:56.423 nvme0n1 : 5.03 1857.82 7.26 0.00 0.00 68775.95 10334.52 68157.44 00:13:56.423 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0xbd0bd 00:13:56.423 nvme1n1 : 5.06 3005.88 11.74 0.00 0.00 42231.05 4083.40 59688.17 00:13:56.423 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:56.423 nvme1n1 : 5.03 2950.85 11.53 0.00 0.00 43156.99 3982.57 61301.37 00:13:56.423 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0x80000 00:13:56.423 nvme2n1 : 5.03 1780.74 6.96 0.00 0.00 71389.47 7360.20 77030.01 00:13:56.423 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x80000 length 0x80000 00:13:56.423 nvme2n1 : 5.06 1898.44 7.42 0.00 0.00 67019.23 4310.25 67350.84 00:13:56.423 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0x80000 00:13:56.423 nvme2n2 : 5.07 1793.68 7.01 0.00 0.00 70715.57 3806.13 64931.05 00:13:56.423 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x80000 length 0x80000 00:13:56.423 nvme2n2 : 5.06 1871.07 7.31 0.00 0.00 67852.97 7158.55 59688.17 00:13:56.423 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0x80000 00:13:56.423 nvme2n3 : 5.07 1793.19 7.00 0.00 0.00 70590.08 4436.28 70577.23 00:13:56.423 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x80000 length 0x80000 00:13:56.423 nvme2n3 : 5.06 1870.53 7.31 0.00 0.00 67737.44 4360.66 67754.14 00:13:56.423 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x0 length 0x20000 00:13:56.423 nvme3n1 : 5.07 1794.24 7.01 0.00 0.00 70435.38 3503.66 67350.84 00:13:56.423 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:56.423 Verification LBA range: start 0x20000 length 0x20000 00:13:56.423 nvme3n1 : 5.06 1871.60 7.31 0.00 0.00 67592.70 6553.60 65737.65 00:13:56.423 [2024-11-05T11:29:55.697Z] =================================================================================================================== 00:13:56.423 [2024-11-05T11:29:55.697Z] Total : 24263.99 94.78 0.00 0.00 62824.82 3503.66 77030.01 00:13:57.357 00:13:57.357 real 0m6.510s 00:13:57.357 user 0m10.556s 00:13:57.357 sys 0m1.552s 00:13:57.357 11:29:56 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.357 11:29:56 
blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:13:57.357 ************************************ 00:13:57.357 END TEST bdev_verify 00:13:57.357 ************************************ 00:13:57.358 11:29:56 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.358 11:29:56 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:13:57.358 11:29:56 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.358 11:29:56 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.358 ************************************ 00:13:57.358 START TEST bdev_verify_big_io 00:13:57.358 ************************************ 00:13:57.358 11:29:56 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:57.358 [2024-11-05 11:29:56.432022] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:13:57.358 [2024-11-05 11:29:56.432134] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70201 ] 00:13:57.358 [2024-11-05 11:29:56.585579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:57.615 [2024-11-05 11:29:56.680549] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.615 [2024-11-05 11:29:56.680623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.873 Running I/O for 5 seconds... 
00:14:04.432 880.00 IOPS, 55.00 MiB/s [2024-11-05T11:30:03.706Z] 2637.50 IOPS, 164.84 MiB/s [2024-11-05T11:30:03.706Z] 3100.67 IOPS, 193.79 MiB/s 00:14:04.432 Latency(us) 00:14:04.432 [2024-11-05T11:30:03.706Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:04.432 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0xa000 00:14:04.432 nvme0n1 : 5.71 112.17 7.01 0.00 0.00 1077408.06 152446.82 1245385.65 00:14:04.432 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0xa000 length 0xa000 00:14:04.432 nvme0n1 : 6.09 115.65 7.23 0.00 0.00 1066297.07 221007.56 1245385.65 00:14:04.432 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0xbd0b 00:14:04.432 nvme1n1 : 5.90 195.21 12.20 0.00 0.00 595895.23 13510.50 1032444.06 00:14:04.432 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:04.432 nvme1n1 : 6.09 157.57 9.85 0.00 0.00 755446.68 10637.00 1045349.61 00:14:04.432 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0x8000 00:14:04.432 nvme2n1 : 5.97 126.89 7.93 0.00 0.00 913078.89 70173.93 1742249.35 00:14:04.432 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x8000 length 0x8000 00:14:04.432 nvme2n1 : 6.13 86.15 5.38 0.00 0.00 1339607.78 168578.76 2542393.50 00:14:04.432 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0x8000 00:14:04.432 nvme2n2 : 6.01 103.82 6.49 0.00 0.00 1071515.22 102841.11 2090699.22 00:14:04.432 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x8000 length 0x8000 00:14:04.432 nvme2n2 : 6.13 144.87 9.05 0.00 0.00 767144.75 33272.12 774333.05 00:14:04.432 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0x8000 00:14:04.432 nvme2n3 : 6.09 134.06 8.38 0.00 0.00 798380.89 8318.03 1897115.96 00:14:04.432 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x8000 length 0x8000 00:14:04.432 nvme2n3 : 6.12 94.07 5.88 0.00 0.00 1141438.97 118569.75 1948738.17 00:14:04.432 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x0 length 0x2000 00:14:04.432 nvme3n1 : 6.10 144.38 9.02 0.00 0.00 719496.45 1159.48 2000360.37 00:14:04.432 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:04.432 Verification LBA range: start 0x2000 length 0x2000 00:14:04.432 nvme3n1 : 6.13 166.93 10.43 0.00 0.00 621861.66 1569.08 1051802.39 00:14:04.432 [2024-11-05T11:30:03.706Z] =================================================================================================================== 00:14:04.432 [2024-11-05T11:30:03.706Z] Total : 1581.77 98.86 0.00 0.00 856553.09 1159.48 2542393.50 00:14:04.690 00:14:04.690 real 0m7.578s 00:14:04.690 user 0m14.060s 00:14:04.690 sys 0m0.390s 00:14:04.690 11:30:03 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.690 11:30:03 
blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.690 ************************************ 00:14:04.690 END TEST bdev_verify_big_io 00:14:04.690 ************************************ 00:14:04.948 11:30:03 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.948 11:30:03 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:04.948 11:30:03 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.948 11:30:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:04.948 ************************************ 00:14:04.948 START TEST bdev_write_zeroes 00:14:04.948 ************************************ 00:14:04.948 11:30:04 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:04.948 [2024-11-05 11:30:04.062827] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:04.948 [2024-11-05 11:30:04.062940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70315 ] 00:14:04.948 [2024-11-05 11:30:04.220114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.210 [2024-11-05 11:30:04.294924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:05.472 Running I/O for 1 seconds... 
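(The write_zeroes pass reuses bdevperf with a 1-second write_zeroes workload; a minimal sketch of the same call, paths as in this job.)

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1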
00:14:06.410 84864.00 IOPS, 331.50 MiB/s 00:14:06.410 Latency(us) 00:14:06.410 [2024-11-05T11:30:05.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.410 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme0n1 : 1.02 13617.33 53.19 0.00 0.00 9391.48 4814.38 21072.34 00:14:06.410 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme1n1 : 1.02 16664.43 65.10 0.00 0.00 7669.15 3138.17 16333.59 00:14:06.410 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme2n1 : 1.02 13598.83 53.12 0.00 0.00 9350.05 5570.56 19055.85 00:14:06.410 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme2n2 : 1.02 13515.89 52.80 0.00 0.00 9403.74 5545.35 17039.36 00:14:06.410 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme2n3 : 1.02 13500.45 52.74 0.00 0.00 9409.29 5545.35 17039.36 00:14:06.410 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:06.410 nvme3n1 : 1.03 13485.30 52.68 0.00 0.00 9415.10 5494.94 18652.55 00:14:06.410 [2024-11-05T11:30:05.684Z] =================================================================================================================== 00:14:06.410 [2024-11-05T11:30:05.684Z] Total : 84382.24 329.62 0.00 0.00 9053.08 3138.17 21072.34 00:14:07.351 00:14:07.351 real 0m2.410s 00:14:07.351 user 0m1.822s 00:14:07.351 sys 0m0.428s 00:14:07.351 11:30:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.351 ************************************ 00:14:07.351 END TEST bdev_write_zeroes 00:14:07.351 ************************************ 00:14:07.351 11:30:06 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:07.351 11:30:06 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.351 11:30:06 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:07.351 11:30:06 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.351 11:30:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.351 ************************************ 00:14:07.351 START TEST bdev_json_nonenclosed 00:14:07.351 ************************************ 00:14:07.351 11:30:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.351 [2024-11-05 11:30:06.555008] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:14:07.351 [2024-11-05 11:30:06.555153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70361 ] 00:14:07.611 [2024-11-05 11:30:06.719900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.611 [2024-11-05 11:30:06.813873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.611 [2024-11-05 11:30:06.813944] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:14:07.611 [2024-11-05 11:30:06.813960] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:07.611 [2024-11-05 11:30:06.813968] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:07.872 00:14:07.872 real 0m0.506s 00:14:07.872 user 0m0.290s 00:14:07.872 sys 0m0.111s 00:14:07.872 ************************************ 00:14:07.872 11:30:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:07.872 11:30:06 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:07.872 END TEST bdev_json_nonenclosed 00:14:07.872 ************************************ 00:14:07.872 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.872 11:30:07 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:14:07.872 11:30:07 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:07.872 11:30:07 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.872 ************************************ 00:14:07.872 START TEST bdev_json_nonarray 00:14:07.872 ************************************ 00:14:07.872 11:30:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.872 [2024-11-05 11:30:07.109021] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:07.872 [2024-11-05 11:30:07.109135] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70387 ] 00:14:08.132 [2024-11-05 11:30:07.269333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.132 [2024-11-05 11:30:07.379045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.132 [2024-11-05 11:30:07.379161] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:14:08.132 [2024-11-05 11:30:07.379181] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:08.132 [2024-11-05 11:30:07.379192] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:08.423 00:14:08.423 real 0m0.521s 00:14:08.423 user 0m0.320s 00:14:08.423 sys 0m0.096s 00:14:08.423 ************************************ 00:14:08.423 END TEST bdev_json_nonarray 00:14:08.423 ************************************ 00:14:08.423 11:30:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:08.423 11:30:07 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:08.423 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:08.423 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:08.424 11:30:07 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:08.995 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:27.099 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.099 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.099 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.099 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:27.099 00:14:27.099 real 1m9.798s 00:14:27.099 user 1m26.905s 00:14:27.099 sys 0m55.440s 00:14:27.099 11:30:25 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:27.099 11:30:25 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:27.099 ************************************ 00:14:27.099 END TEST blockdev_xnvme 00:14:27.099 ************************************ 00:14:27.099 11:30:25 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:27.099 11:30:25 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:27.099 11:30:25 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:27.099 11:30:25 -- common/autotest_common.sh@10 -- # set +x 00:14:27.099 ************************************ 00:14:27.099 START TEST ublk 00:14:27.099 ************************************ 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:27.099 * Looking for test storage... 
00:14:27.099 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:27.099 11:30:25 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:27.099 11:30:25 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:27.099 11:30:25 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:27.099 11:30:25 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:27.099 11:30:25 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:27.099 11:30:25 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:27.099 11:30:25 ublk -- scripts/common.sh@345 -- # : 1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:27.099 11:30:25 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:27.099 11:30:25 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@353 -- # local d=1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:27.099 11:30:25 ublk -- scripts/common.sh@355 -- # echo 1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:27.099 11:30:25 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@353 -- # local d=2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:27.099 11:30:25 ublk -- scripts/common.sh@355 -- # echo 2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:27.099 11:30:25 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:27.099 11:30:25 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:27.099 11:30:25 ublk -- scripts/common.sh@368 -- # return 0 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.099 --rc genhtml_branch_coverage=1 00:14:27.099 --rc genhtml_function_coverage=1 00:14:27.099 --rc genhtml_legend=1 00:14:27.099 --rc geninfo_all_blocks=1 00:14:27.099 --rc geninfo_unexecuted_blocks=1 00:14:27.099 00:14:27.099 ' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.099 --rc genhtml_branch_coverage=1 00:14:27.099 --rc genhtml_function_coverage=1 00:14:27.099 --rc genhtml_legend=1 00:14:27.099 --rc geninfo_all_blocks=1 00:14:27.099 --rc geninfo_unexecuted_blocks=1 00:14:27.099 00:14:27.099 ' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.099 --rc genhtml_branch_coverage=1 00:14:27.099 --rc 
genhtml_function_coverage=1 00:14:27.099 --rc genhtml_legend=1 00:14:27.099 --rc geninfo_all_blocks=1 00:14:27.099 --rc geninfo_unexecuted_blocks=1 00:14:27.099 00:14:27.099 ' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:27.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:27.099 --rc genhtml_branch_coverage=1 00:14:27.099 --rc genhtml_function_coverage=1 00:14:27.099 --rc genhtml_legend=1 00:14:27.099 --rc geninfo_all_blocks=1 00:14:27.099 --rc geninfo_unexecuted_blocks=1 00:14:27.099 00:14:27.099 ' 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:27.099 11:30:25 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:27.099 11:30:25 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:27.099 11:30:25 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:27.099 11:30:25 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:27.099 11:30:25 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:27.099 11:30:25 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:27.099 11:30:25 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:27.099 11:30:25 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:27.099 11:30:25 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:27.099 11:30:25 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.099 ************************************ 00:14:27.100 START TEST test_save_ublk_config 00:14:27.100 ************************************ 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=70678 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 70678 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70678 ']' 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:27.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
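(A minimal sketch of the setup the traced lines above perform: load the kernel ublk driver, then start the SPDK target with ublk debug logging; paths as in this job.)

  modprobe ublk_drv                                            # kernel side of ublk
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk &    # target listens on /var/tmp/spdk.sock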
00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:27.100 11:30:25 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:27.100 [2024-11-05 11:30:25.895555] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:27.100 [2024-11-05 11:30:25.895671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70678 ] 00:14:27.100 [2024-11-05 11:30:26.050519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:27.100 [2024-11-05 11:30:26.173866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.673 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:27.673 [2024-11-05 11:30:26.844825] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:27.673 [2024-11-05 11:30:26.845657] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:27.673 malloc0 00:14:27.673 [2024-11-05 11:30:26.908943] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:27.673 [2024-11-05 11:30:26.909024] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:27.673 [2024-11-05 11:30:26.909034] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:27.673 [2024-11-05 11:30:26.909042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.673 [2024-11-05 11:30:26.917927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.673 [2024-11-05 11:30:26.917958] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.673 [2024-11-05 11:30:26.924841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.673 [2024-11-05 11:30:26.924950] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:27.673 [2024-11-05 11:30:26.941832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.934 0 00:14:27.934 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.934 11:30:26 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:27.934 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.934 11:30:26 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:28.195 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.195 11:30:27 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:28.195 "subsystems": [ 00:14:28.195 { 00:14:28.195 "subsystem": 
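(The JSON that follows is the save_config output for this target; a minimal sketch of fetching it by hand over the default RPC socket shown above.)

  # /tmp/ublk_config.json is a hypothetical output path for illustration
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/ublk_config.json

The same JSON is later fed back to a fresh spdk_tgt via '-c /dev/fd/63' to check that the saved ublk configuration restores cleanly.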
"fsdev", 00:14:28.195 "config": [ 00:14:28.195 { 00:14:28.195 "method": "fsdev_set_opts", 00:14:28.195 "params": { 00:14:28.195 "fsdev_io_pool_size": 65535, 00:14:28.195 "fsdev_io_cache_size": 256 00:14:28.195 } 00:14:28.195 } 00:14:28.195 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "keyring", 00:14:28.196 "config": [] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "iobuf", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "iobuf_set_options", 00:14:28.196 "params": { 00:14:28.196 "small_pool_count": 8192, 00:14:28.196 "large_pool_count": 1024, 00:14:28.196 "small_bufsize": 8192, 00:14:28.196 "large_bufsize": 135168, 00:14:28.196 "enable_numa": false 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "sock", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "sock_set_default_impl", 00:14:28.196 "params": { 00:14:28.196 "impl_name": "posix" 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "sock_impl_set_options", 00:14:28.196 "params": { 00:14:28.196 "impl_name": "ssl", 00:14:28.196 "recv_buf_size": 4096, 00:14:28.196 "send_buf_size": 4096, 00:14:28.196 "enable_recv_pipe": true, 00:14:28.196 "enable_quickack": false, 00:14:28.196 "enable_placement_id": 0, 00:14:28.196 "enable_zerocopy_send_server": true, 00:14:28.196 "enable_zerocopy_send_client": false, 00:14:28.196 "zerocopy_threshold": 0, 00:14:28.196 "tls_version": 0, 00:14:28.196 "enable_ktls": false 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "sock_impl_set_options", 00:14:28.196 "params": { 00:14:28.196 "impl_name": "posix", 00:14:28.196 "recv_buf_size": 2097152, 00:14:28.196 "send_buf_size": 2097152, 00:14:28.196 "enable_recv_pipe": true, 00:14:28.196 "enable_quickack": false, 00:14:28.196 "enable_placement_id": 0, 00:14:28.196 "enable_zerocopy_send_server": true, 00:14:28.196 "enable_zerocopy_send_client": false, 00:14:28.196 "zerocopy_threshold": 0, 00:14:28.196 "tls_version": 0, 00:14:28.196 "enable_ktls": false 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "vmd", 00:14:28.196 "config": [] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "accel", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "accel_set_options", 00:14:28.196 "params": { 00:14:28.196 "small_cache_size": 128, 00:14:28.196 "large_cache_size": 16, 00:14:28.196 "task_count": 2048, 00:14:28.196 "sequence_count": 2048, 00:14:28.196 "buf_count": 2048 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "bdev", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "bdev_set_options", 00:14:28.196 "params": { 00:14:28.196 "bdev_io_pool_size": 65535, 00:14:28.196 "bdev_io_cache_size": 256, 00:14:28.196 "bdev_auto_examine": true, 00:14:28.196 "iobuf_small_cache_size": 128, 00:14:28.196 "iobuf_large_cache_size": 16 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_raid_set_options", 00:14:28.196 "params": { 00:14:28.196 "process_window_size_kb": 1024, 00:14:28.196 "process_max_bandwidth_mb_sec": 0 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_iscsi_set_options", 00:14:28.196 "params": { 00:14:28.196 "timeout_sec": 30 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_nvme_set_options", 00:14:28.196 "params": { 00:14:28.196 "action_on_timeout": "none", 00:14:28.196 "timeout_us": 0, 00:14:28.196 "timeout_admin_us": 0, 
00:14:28.196 "keep_alive_timeout_ms": 10000, 00:14:28.196 "arbitration_burst": 0, 00:14:28.196 "low_priority_weight": 0, 00:14:28.196 "medium_priority_weight": 0, 00:14:28.196 "high_priority_weight": 0, 00:14:28.196 "nvme_adminq_poll_period_us": 10000, 00:14:28.196 "nvme_ioq_poll_period_us": 0, 00:14:28.196 "io_queue_requests": 0, 00:14:28.196 "delay_cmd_submit": true, 00:14:28.196 "transport_retry_count": 4, 00:14:28.196 "bdev_retry_count": 3, 00:14:28.196 "transport_ack_timeout": 0, 00:14:28.196 "ctrlr_loss_timeout_sec": 0, 00:14:28.196 "reconnect_delay_sec": 0, 00:14:28.196 "fast_io_fail_timeout_sec": 0, 00:14:28.196 "disable_auto_failback": false, 00:14:28.196 "generate_uuids": false, 00:14:28.196 "transport_tos": 0, 00:14:28.196 "nvme_error_stat": false, 00:14:28.196 "rdma_srq_size": 0, 00:14:28.196 "io_path_stat": false, 00:14:28.196 "allow_accel_sequence": false, 00:14:28.196 "rdma_max_cq_size": 0, 00:14:28.196 "rdma_cm_event_timeout_ms": 0, 00:14:28.196 "dhchap_digests": [ 00:14:28.196 "sha256", 00:14:28.196 "sha384", 00:14:28.196 "sha512" 00:14:28.196 ], 00:14:28.196 "dhchap_dhgroups": [ 00:14:28.196 "null", 00:14:28.196 "ffdhe2048", 00:14:28.196 "ffdhe3072", 00:14:28.196 "ffdhe4096", 00:14:28.196 "ffdhe6144", 00:14:28.196 "ffdhe8192" 00:14:28.196 ] 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_nvme_set_hotplug", 00:14:28.196 "params": { 00:14:28.196 "period_us": 100000, 00:14:28.196 "enable": false 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_malloc_create", 00:14:28.196 "params": { 00:14:28.196 "name": "malloc0", 00:14:28.196 "num_blocks": 8192, 00:14:28.196 "block_size": 4096, 00:14:28.196 "physical_block_size": 4096, 00:14:28.196 "uuid": "29e9a58f-eb67-425d-9025-cb4c2df29bb7", 00:14:28.196 "optimal_io_boundary": 0, 00:14:28.196 "md_size": 0, 00:14:28.196 "dif_type": 0, 00:14:28.196 "dif_is_head_of_md": false, 00:14:28.196 "dif_pi_format": 0 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "bdev_wait_for_examine" 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "scsi", 00:14:28.196 "config": null 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "scheduler", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "framework_set_scheduler", 00:14:28.196 "params": { 00:14:28.196 "name": "static" 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "vhost_scsi", 00:14:28.196 "config": [] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "vhost_blk", 00:14:28.196 "config": [] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "ublk", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "ublk_create_target", 00:14:28.196 "params": { 00:14:28.196 "cpumask": "1" 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "ublk_start_disk", 00:14:28.196 "params": { 00:14:28.196 "bdev_name": "malloc0", 00:14:28.196 "ublk_id": 0, 00:14:28.196 "num_queues": 1, 00:14:28.196 "queue_depth": 128 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "nbd", 00:14:28.196 "config": [] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "nvmf", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "nvmf_set_config", 00:14:28.196 "params": { 00:14:28.196 "discovery_filter": "match_any", 00:14:28.196 "admin_cmd_passthru": { 00:14:28.196 "identify_ctrlr": false 00:14:28.196 }, 00:14:28.196 "dhchap_digests": [ 00:14:28.196 "sha256", 
00:14:28.196 "sha384", 00:14:28.196 "sha512" 00:14:28.196 ], 00:14:28.196 "dhchap_dhgroups": [ 00:14:28.196 "null", 00:14:28.196 "ffdhe2048", 00:14:28.196 "ffdhe3072", 00:14:28.196 "ffdhe4096", 00:14:28.196 "ffdhe6144", 00:14:28.196 "ffdhe8192" 00:14:28.196 ] 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "nvmf_set_max_subsystems", 00:14:28.196 "params": { 00:14:28.196 "max_subsystems": 1024 00:14:28.196 } 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "method": "nvmf_set_crdt", 00:14:28.196 "params": { 00:14:28.196 "crdt1": 0, 00:14:28.196 "crdt2": 0, 00:14:28.196 "crdt3": 0 00:14:28.196 } 00:14:28.196 } 00:14:28.196 ] 00:14:28.196 }, 00:14:28.196 { 00:14:28.196 "subsystem": "iscsi", 00:14:28.196 "config": [ 00:14:28.196 { 00:14:28.196 "method": "iscsi_set_options", 00:14:28.196 "params": { 00:14:28.196 "node_base": "iqn.2016-06.io.spdk", 00:14:28.196 "max_sessions": 128, 00:14:28.196 "max_connections_per_session": 2, 00:14:28.196 "max_queue_depth": 64, 00:14:28.196 "default_time2wait": 2, 00:14:28.196 "default_time2retain": 20, 00:14:28.196 "first_burst_length": 8192, 00:14:28.196 "immediate_data": true, 00:14:28.196 "allow_duplicated_isid": false, 00:14:28.196 "error_recovery_level": 0, 00:14:28.196 "nop_timeout": 60, 00:14:28.196 "nop_in_interval": 30, 00:14:28.196 "disable_chap": false, 00:14:28.196 "require_chap": false, 00:14:28.196 "mutual_chap": false, 00:14:28.196 "chap_group": 0, 00:14:28.197 "max_large_datain_per_connection": 64, 00:14:28.197 "max_r2t_per_connection": 4, 00:14:28.197 "pdu_pool_size": 36864, 00:14:28.197 "immediate_data_pool_size": 16384, 00:14:28.197 "data_out_pool_size": 2048 00:14:28.197 } 00:14:28.197 } 00:14:28.197 ] 00:14:28.197 } 00:14:28.197 ] 00:14:28.197 }' 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 70678 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70678 ']' 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70678 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70678 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:28.197 killing process with pid 70678 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70678' 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70678 00:14:28.197 11:30:27 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70678 00:14:29.586 [2024-11-05 11:30:28.549256] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:29.586 [2024-11-05 11:30:28.595891] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:29.586 [2024-11-05 11:30:28.595986] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:29.586 [2024-11-05 11:30:28.603840] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:29.586 [2024-11-05 11:30:28.603882] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from 
tailq 00:14:29.586 [2024-11-05 11:30:28.603891] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:29.586 [2024-11-05 11:30:28.603911] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:29.586 [2024-11-05 11:30:28.604018] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:30.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=70738 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 70738 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 70738 ']' 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:30.975 11:30:29 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:30.975 "subsystems": [ 00:14:30.975 { 00:14:30.975 "subsystem": "fsdev", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "fsdev_set_opts", 00:14:30.975 "params": { 00:14:30.975 "fsdev_io_pool_size": 65535, 00:14:30.975 "fsdev_io_cache_size": 256 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "keyring", 00:14:30.975 "config": [] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "iobuf", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "iobuf_set_options", 00:14:30.975 "params": { 00:14:30.975 "small_pool_count": 8192, 00:14:30.975 "large_pool_count": 1024, 00:14:30.975 "small_bufsize": 8192, 00:14:30.975 "large_bufsize": 135168, 00:14:30.975 "enable_numa": false 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "sock", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "sock_set_default_impl", 00:14:30.975 "params": { 00:14:30.975 "impl_name": "posix" 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "sock_impl_set_options", 00:14:30.975 "params": { 00:14:30.975 "impl_name": "ssl", 00:14:30.975 "recv_buf_size": 4096, 00:14:30.975 "send_buf_size": 4096, 00:14:30.975 "enable_recv_pipe": true, 00:14:30.975 "enable_quickack": false, 00:14:30.975 "enable_placement_id": 0, 00:14:30.975 "enable_zerocopy_send_server": true, 00:14:30.975 "enable_zerocopy_send_client": false, 00:14:30.975 "zerocopy_threshold": 0, 00:14:30.975 "tls_version": 0, 00:14:30.975 "enable_ktls": false 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "sock_impl_set_options", 00:14:30.975 "params": { 00:14:30.975 "impl_name": "posix", 00:14:30.975 "recv_buf_size": 2097152, 00:14:30.975 "send_buf_size": 2097152, 00:14:30.975 "enable_recv_pipe": true, 00:14:30.975 "enable_quickack": false, 00:14:30.975 "enable_placement_id": 0, 00:14:30.975 "enable_zerocopy_send_server": true, 00:14:30.975 "enable_zerocopy_send_client": false, 00:14:30.975 "zerocopy_threshold": 0, 
00:14:30.975 "tls_version": 0, 00:14:30.975 "enable_ktls": false 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "vmd", 00:14:30.975 "config": [] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "accel", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "accel_set_options", 00:14:30.975 "params": { 00:14:30.975 "small_cache_size": 128, 00:14:30.975 "large_cache_size": 16, 00:14:30.975 "task_count": 2048, 00:14:30.975 "sequence_count": 2048, 00:14:30.975 "buf_count": 2048 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "bdev", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "bdev_set_options", 00:14:30.975 "params": { 00:14:30.975 "bdev_io_pool_size": 65535, 00:14:30.975 "bdev_io_cache_size": 256, 00:14:30.975 "bdev_auto_examine": true, 00:14:30.975 "iobuf_small_cache_size": 128, 00:14:30.975 "iobuf_large_cache_size": 16 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "bdev_raid_set_options", 00:14:30.975 "params": { 00:14:30.975 "process_window_size_kb": 1024, 00:14:30.975 "process_max_bandwidth_mb_sec": 0 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "bdev_iscsi_set_options", 00:14:30.975 "params": { 00:14:30.975 "timeout_sec": 30 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "bdev_nvme_set_options", 00:14:30.975 "params": { 00:14:30.975 "action_on_timeout": "none", 00:14:30.975 "timeout_us": 0, 00:14:30.975 "timeout_admin_us": 0, 00:14:30.975 "keep_alive_timeout_ms": 10000, 00:14:30.975 "arbitration_burst": 0, 00:14:30.975 "low_priority_weight": 0, 00:14:30.975 "medium_priority_weight": 0, 00:14:30.975 "high_priority_weight": 0, 00:14:30.975 "nvme_adminq_poll_period_us": 10000, 00:14:30.975 "nvme_ioq_poll_period_us": 0, 00:14:30.975 "io_queue_requests": 0, 00:14:30.975 "delay_cmd_submit": true, 00:14:30.975 "transport_retry_count": 4, 00:14:30.975 "bdev_retry_count": 3, 00:14:30.975 "transport_ack_timeout": 0, 00:14:30.975 "ctrlr_loss_timeout_sec": 0, 00:14:30.975 "reconnect_delay_sec": 0, 00:14:30.975 "fast_io_fail_timeout_sec": 0, 00:14:30.975 "disable_auto_failback": false, 00:14:30.975 "generate_uuids": false, 00:14:30.975 "transport_tos": 0, 00:14:30.975 "nvme_error_stat": false, 00:14:30.975 "rdma_srq_size": 0, 00:14:30.975 "io_path_stat": false, 00:14:30.975 "allow_accel_sequence": false, 00:14:30.975 "rdma_max_cq_size": 0, 00:14:30.975 "rdma_cm_event_timeout_ms": 0, 00:14:30.975 "dhchap_digests": [ 00:14:30.975 "sha256", 00:14:30.975 "sha384", 00:14:30.975 "sha512" 00:14:30.975 ], 00:14:30.975 "dhchap_dhgroups": [ 00:14:30.975 "null", 00:14:30.975 "ffdhe2048", 00:14:30.975 "ffdhe3072", 00:14:30.975 "ffdhe4096", 00:14:30.975 "ffdhe6144", 00:14:30.975 "ffdhe8192" 00:14:30.975 ] 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "bdev_nvme_set_hotplug", 00:14:30.975 "params": { 00:14:30.975 "period_us": 100000, 00:14:30.975 "enable": false 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "bdev_malloc_create", 00:14:30.975 "params": { 00:14:30.975 "name": "malloc0", 00:14:30.975 "num_blocks": 8192, 00:14:30.975 "block_size": 4096, 00:14:30.975 "physical_block_size": 4096, 00:14:30.975 "uuid": "29e9a58f-eb67-425d-9025-cb4c2df29bb7", 00:14:30.975 "optimal_io_boundary": 0, 00:14:30.975 "md_size": 0, 00:14:30.975 "dif_type": 0, 00:14:30.975 "dif_is_head_of_md": false, 00:14:30.975 "dif_pi_format": 0 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 
{ 00:14:30.975 "method": "bdev_wait_for_examine" 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "scsi", 00:14:30.975 "config": null 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "scheduler", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "framework_set_scheduler", 00:14:30.975 "params": { 00:14:30.975 "name": "static" 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "vhost_scsi", 00:14:30.975 "config": [] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "vhost_blk", 00:14:30.975 "config": [] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "ublk", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "ublk_create_target", 00:14:30.975 "params": { 00:14:30.975 "cpumask": "1" 00:14:30.975 } 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "method": "ublk_start_disk", 00:14:30.975 "params": { 00:14:30.975 "bdev_name": "malloc0", 00:14:30.975 "ublk_id": 0, 00:14:30.975 "num_queues": 1, 00:14:30.975 "queue_depth": 128 00:14:30.975 } 00:14:30.975 } 00:14:30.975 ] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "nbd", 00:14:30.975 "config": [] 00:14:30.975 }, 00:14:30.975 { 00:14:30.975 "subsystem": "nvmf", 00:14:30.975 "config": [ 00:14:30.975 { 00:14:30.975 "method": "nvmf_set_config", 00:14:30.975 "params": { 00:14:30.975 "discovery_filter": "match_any", 00:14:30.975 "admin_cmd_passthru": { 00:14:30.975 "identify_ctrlr": false 00:14:30.975 }, 00:14:30.975 "dhchap_digests": [ 00:14:30.975 "sha256", 00:14:30.975 "sha384", 00:14:30.975 "sha512" 00:14:30.975 ], 00:14:30.976 "dhchap_dhgroups": [ 00:14:30.976 "null", 00:14:30.976 "ffdhe2048", 00:14:30.976 "ffdhe3072", 00:14:30.976 "ffdhe4096", 00:14:30.976 "ffdhe6144", 00:14:30.976 "ffdhe8192" 00:14:30.976 ] 00:14:30.976 } 00:14:30.976 }, 00:14:30.976 { 00:14:30.976 "method": "nvmf_set_max_subsystems", 00:14:30.976 "params": { 00:14:30.976 "max_subsystems": 1024 00:14:30.976 } 00:14:30.976 }, 00:14:30.976 { 00:14:30.976 "method": "nvmf_set_crdt", 00:14:30.976 "params": { 00:14:30.976 "crdt1": 0, 00:14:30.976 "crdt2": 0, 00:14:30.976 "crdt3": 0 00:14:30.976 } 00:14:30.976 } 00:14:30.976 ] 00:14:30.976 }, 00:14:30.976 { 00:14:30.976 "subsystem": "iscsi", 00:14:30.976 "config": [ 00:14:30.976 { 00:14:30.976 "method": "iscsi_set_options", 00:14:30.976 "params": { 00:14:30.976 "node_base": "iqn.2016-06.io.spdk", 00:14:30.976 "max_sessions": 128, 00:14:30.976 "max_connections_per_session": 2, 00:14:30.976 "max_queue_depth": 64, 00:14:30.976 "default_time2wait": 2, 00:14:30.976 "default_time2retain": 20, 00:14:30.976 "first_burst_length": 8192, 00:14:30.976 "immediate_data": true, 00:14:30.976 "allow_duplicated_isid": false, 00:14:30.976 "error_recovery_level": 0, 00:14:30.976 "nop_timeout": 60, 00:14:30.976 "nop_in_interval": 30, 00:14:30.976 "disable_chap": false, 00:14:30.976 "require_chap": false, 00:14:30.976 "mutual_chap": false, 00:14:30.976 "chap_group": 0, 00:14:30.976 "max_large_datain_per_connection": 64, 00:14:30.976 "max_r2t_per_connection": 4, 00:14:30.976 "pdu_pool_size": 36864, 00:14:30.976 "immediate_data_pool_size": 16384, 00:14:30.976 "data_out_pool_size": 2048 00:14:30.976 } 00:14:30.976 } 00:14:30.976 ] 00:14:30.976 } 00:14:30.976 ] 00:14:30.976 }' 00:14:30.976 [2024-11-05 11:30:29.978121] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:14:30.976 [2024-11-05 11:30:29.978233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70738 ] 00:14:30.976 [2024-11-05 11:30:30.133109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:30.976 [2024-11-05 11:30:30.217150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:31.919 [2024-11-05 11:30:30.854817] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:31.919 [2024-11-05 11:30:30.855453] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:31.920 [2024-11-05 11:30:30.862907] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:31.920 [2024-11-05 11:30:30.862966] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:31.920 [2024-11-05 11:30:30.862973] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:31.920 [2024-11-05 11:30:30.862978] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:31.920 [2024-11-05 11:30:30.871867] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:31.920 [2024-11-05 11:30:30.871887] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:31.920 [2024-11-05 11:30:30.878824] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:31.920 [2024-11-05 11:30:30.878897] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:31.920 [2024-11-05 11:30:30.895819] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 70738 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 70738 ']' 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 70738 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70738 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:31.920 killing process with pid 70738 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70738' 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 70738 00:14:31.920 11:30:30 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 70738 00:14:32.862 [2024-11-05 11:30:31.981345] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:32.862 [2024-11-05 11:30:32.022871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:32.862 [2024-11-05 11:30:32.022993] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:32.862 [2024-11-05 11:30:32.027823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:32.862 [2024-11-05 11:30:32.027863] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:32.862 [2024-11-05 11:30:32.027869] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:32.862 [2024-11-05 11:30:32.027890] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:32.862 [2024-11-05 11:30:32.027996] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:34.234 11:30:33 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:34.234 ************************************ 00:14:34.234 END TEST test_save_ublk_config 00:14:34.234 ************************************ 00:14:34.234 00:14:34.234 real 0m7.369s 00:14:34.234 user 0m4.745s 00:14:34.234 sys 0m3.205s 00:14:34.234 11:30:33 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:34.234 11:30:33 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:34.234 11:30:33 ublk -- ublk/ublk.sh@139 -- # spdk_pid=70807 00:14:34.234 11:30:33 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:34.234 11:30:33 ublk -- ublk/ublk.sh@141 -- # waitforlisten 70807 00:14:34.234 11:30:33 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@833 -- # '[' -z 70807 ']' 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:34.234 11:30:33 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:34.234 [2024-11-05 11:30:33.294861] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:14:34.234 [2024-11-05 11:30:33.294952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70807 ] 00:14:34.234 [2024-11-05 11:30:33.444791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:34.492 [2024-11-05 11:30:33.521618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:34.492 [2024-11-05 11:30:33.521684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.058 11:30:34 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:35.058 11:30:34 ublk -- common/autotest_common.sh@866 -- # return 0 00:14:35.058 11:30:34 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:35.058 11:30:34 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:35.058 11:30:34 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:35.058 11:30:34 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.058 ************************************ 00:14:35.058 START TEST test_create_ublk 00:14:35.058 ************************************ 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:14:35.058 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.058 [2024-11-05 11:30:34.143820] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:35.058 [2024-11-05 11:30:34.145302] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.058 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:35.058 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.058 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:35.058 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.058 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.058 [2024-11-05 11:30:34.295932] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:35.058 [2024-11-05 11:30:34.296226] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:35.058 [2024-11-05 11:30:34.296239] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:35.058 [2024-11-05 11:30:34.296244] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:35.058 [2024-11-05 11:30:34.304987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:35.058 [2024-11-05 11:30:34.305005] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:35.058 
[2024-11-05 11:30:34.311822] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:35.058 [2024-11-05 11:30:34.320866] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:35.058 [2024-11-05 11:30:34.329832] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:35.316 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:35.316 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.316 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:35.316 11:30:34 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:35.316 { 00:14:35.316 "ublk_device": "/dev/ublkb0", 00:14:35.316 "id": 0, 00:14:35.316 "queue_depth": 512, 00:14:35.316 "num_queues": 4, 00:14:35.316 "bdev_name": "Malloc0" 00:14:35.316 } 00:14:35.316 ]' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
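For readers tracing the xtrace above: the fio_template string just assembled expands to the single command sketched below. This is only an annotated restatement of that invocation (the options are standard fio flags, nothing SPDK-specific), shown so the verify-write workload driven against the ublk block device is easier to read.

# Annotated equivalent of the fio command the test launches next (sketch only):
#   --filename=/dev/ublkb0                  block device exposed by ublk_start_disk
#   --offset=0 --size=134217728             cover the full 128 MiB device from byte 0
#   --rw=write --direct=1                   sequential writes using O_DIRECT
#   --time_based --runtime=10               run for a fixed 10 seconds
#   --do_verify=1 --verify=pattern --verify_pattern=0xcc
#                                           re-read and confirm the 0xcc pattern landed on disk
#   --verify_state_save=0                   do not persist fio's verify state between runs
fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
    --rw=write --direct=1 --time_based --runtime=10 \
    --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0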
00:14:35.316 11:30:34 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:35.574 fio: verification read phase will never start because write phase uses all of runtime 00:14:35.574 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:35.574 fio-3.35 00:14:35.574 Starting 1 process 00:14:45.542 00:14:45.542 fio_test: (groupid=0, jobs=1): err= 0: pid=70851: Tue Nov 5 11:30:44 2024 00:14:45.542 write: IOPS=19.9k, BW=77.8MiB/s (81.6MB/s)(779MiB/10001msec); 0 zone resets 00:14:45.542 clat (usec): min=32, max=4083, avg=49.35, stdev=79.13 00:14:45.542 lat (usec): min=32, max=4084, avg=49.81, stdev=79.15 00:14:45.542 clat percentiles (usec): 00:14:45.542 | 1.00th=[ 38], 5.00th=[ 39], 10.00th=[ 41], 20.00th=[ 43], 00:14:45.542 | 30.00th=[ 44], 40.00th=[ 45], 50.00th=[ 46], 60.00th=[ 47], 00:14:45.542 | 70.00th=[ 48], 80.00th=[ 50], 90.00th=[ 54], 95.00th=[ 61], 00:14:45.542 | 99.00th=[ 70], 99.50th=[ 79], 99.90th=[ 1172], 99.95th=[ 2245], 00:14:45.542 | 99.99th=[ 3458] 00:14:45.542 bw ( KiB/s): min=75224, max=82912, per=99.96%, avg=79686.42, stdev=2435.00, samples=19 00:14:45.542 iops : min=18806, max=20728, avg=19921.58, stdev=608.71, samples=19 00:14:45.542 lat (usec) : 50=82.32%, 100=17.35%, 250=0.15%, 500=0.04%, 750=0.01% 00:14:45.542 lat (usec) : 1000=0.01% 00:14:45.542 lat (msec) : 2=0.05%, 4=0.06%, 10=0.01% 00:14:45.542 cpu : usr=3.19%, sys=16.71%, ctx=199307, majf=0, minf=796 00:14:45.542 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:45.542 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.542 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:45.542 issued rwts: total=0,199312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:45.542 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:45.542 00:14:45.542 Run status group 0 (all jobs): 00:14:45.542 WRITE: bw=77.8MiB/s (81.6MB/s), 77.8MiB/s-77.8MiB/s (81.6MB/s-81.6MB/s), io=779MiB (816MB), run=10001-10001msec 00:14:45.542 00:14:45.543 Disk stats (read/write): 00:14:45.543 ublkb0: ios=0/197230, merge=0/0, ticks=0/8006, in_queue=8006, util=99.09% 00:14:45.543 11:30:44 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.543 [2024-11-05 11:30:44.745656] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:45.543 [2024-11-05 11:30:44.789265] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:45.543 [2024-11-05 11:30:44.790259] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:45.543 [2024-11-05 11:30:44.796831] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:45.543 [2024-11-05 11:30:44.797069] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:45.543 [2024-11-05 11:30:44.797087] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.543 11:30:44 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.543 [2024-11-05 11:30:44.811877] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:45.543 request: 00:14:45.543 { 00:14:45.543 "ublk_id": 0, 00:14:45.543 "method": "ublk_stop_disk", 00:14:45.543 "req_id": 1 00:14:45.543 } 00:14:45.543 Got JSON-RPC error response 00:14:45.543 response: 00:14:45.543 { 00:14:45.543 "code": -19, 00:14:45.543 "message": "No such device" 00:14:45.543 } 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.543 11:30:44 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.543 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:45.803 [2024-11-05 11:30:44.820882] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:45.803 [2024-11-05 11:30:44.824575] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:45.803 [2024-11-05 11:30:44.824608] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:45.803 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.803 11:30:44 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:45.803 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.803 11:30:44 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.065 11:30:45 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:46.065 11:30:45 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:46.065 11:30:45 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:46.065 00:14:46.065 real 0m11.145s 00:14:46.065 user 0m0.619s 00:14:46.065 sys 0m1.739s 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:46.065 11:30:45 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 ************************************ 00:14:46.065 END TEST test_create_ublk 00:14:46.065 ************************************ 00:14:46.065 11:30:45 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:46.065 11:30:45 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:46.065 11:30:45 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:46.065 11:30:45 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 ************************************ 00:14:46.065 START TEST test_create_multi_ublk 00:14:46.065 ************************************ 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.065 [2024-11-05 11:30:45.331815] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:46.065 [2024-11-05 11:30:45.333332] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:46.065 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.327 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.327 [2024-11-05 11:30:45.543922] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:14:46.327 [2024-11-05 11:30:45.544215] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:46.327 [2024-11-05 11:30:45.544227] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:46.327 [2024-11-05 11:30:45.544235] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:46.327 [2024-11-05 11:30:45.554865] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:46.327 [2024-11-05 11:30:45.554886] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:46.327 [2024-11-05 11:30:45.567826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:46.327 [2024-11-05 11:30:45.568316] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:46.327 [2024-11-05 11:30:45.594821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.588 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.588 [2024-11-05 11:30:45.821921] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:46.588 [2024-11-05 11:30:45.822208] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:46.588 [2024-11-05 11:30:45.822221] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:46.588 [2024-11-05 11:30:45.822226] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:46.588 [2024-11-05 11:30:45.829843] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:46.588 [2024-11-05 11:30:45.829860] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:46.589 [2024-11-05 11:30:45.837826] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:46.589 [2024-11-05 11:30:45.838328] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:46.589 [2024-11-05 11:30:45.854820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:46.589 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.589 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:46.589 11:30:45 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:46.849 11:30:45 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:46.849 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.849 11:30:45 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.849 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.849 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:46.849 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:46.849 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.849 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:46.849 [2024-11-05 11:30:46.013911] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:46.849 [2024-11-05 11:30:46.014205] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:46.849 [2024-11-05 11:30:46.014216] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:46.850 [2024-11-05 11:30:46.014223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:46.850 [2024-11-05 11:30:46.021829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:46.850 [2024-11-05 11:30:46.021850] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:46.850 [2024-11-05 11:30:46.029825] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:46.850 [2024-11-05 11:30:46.030337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:46.850 [2024-11-05 11:30:46.038837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.850 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.112 [2024-11-05 11:30:46.197913] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:47.112 [2024-11-05 11:30:46.198201] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:47.112 [2024-11-05 11:30:46.198214] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:47.112 [2024-11-05 11:30:46.198220] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:47.112 [2024-11-05 
11:30:46.205841] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:47.112 [2024-11-05 11:30:46.205858] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:47.112 [2024-11-05 11:30:46.213821] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:47.112 [2024-11-05 11:30:46.214313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:47.112 [2024-11-05 11:30:46.218497] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:47.112 { 00:14:47.112 "ublk_device": "/dev/ublkb0", 00:14:47.112 "id": 0, 00:14:47.112 "queue_depth": 512, 00:14:47.112 "num_queues": 4, 00:14:47.112 "bdev_name": "Malloc0" 00:14:47.112 }, 00:14:47.112 { 00:14:47.112 "ublk_device": "/dev/ublkb1", 00:14:47.112 "id": 1, 00:14:47.112 "queue_depth": 512, 00:14:47.112 "num_queues": 4, 00:14:47.112 "bdev_name": "Malloc1" 00:14:47.112 }, 00:14:47.112 { 00:14:47.112 "ublk_device": "/dev/ublkb2", 00:14:47.112 "id": 2, 00:14:47.112 "queue_depth": 512, 00:14:47.112 "num_queues": 4, 00:14:47.112 "bdev_name": "Malloc2" 00:14:47.112 }, 00:14:47.112 { 00:14:47.112 "ublk_device": "/dev/ublkb3", 00:14:47.112 "id": 3, 00:14:47.112 "queue_depth": 512, 00:14:47.112 "num_queues": 4, 00:14:47.112 "bdev_name": "Malloc3" 00:14:47.112 } 00:14:47.112 ]' 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:47.112 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
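Before the per-device field checks continue below, note that the listing being validated here is simply the output of the ublk_get_disks RPC; it can be reproduced by hand with the same rpc.py script used elsewhere in this run. A minimal sketch, assuming the target is still up and listening on the default /var/tmp/spdk.sock:

# Dump every ublk device the running target currently exposes (same JSON array as above).
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks
# Pick out one field the same way ublk.sh does, e.g. the device node of entry 1.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[1].ublk_device'   # -> /dev/ublkb1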
00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:47.372 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.634 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.634 [2024-11-05 11:30:46.865892] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:47.634 [2024-11-05 11:30:46.899266] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:47.634 [2024-11-05 11:30:46.900332] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:47.634 [2024-11-05 11:30:46.905830] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:47.634 [2024-11-05 11:30:46.906049] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:47.634 [2024-11-05 11:30:46.906062] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.895 [2024-11-05 11:30:46.921870] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:47.895 [2024-11-05 11:30:46.961241] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:47.895 [2024-11-05 11:30:46.962243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:47.895 [2024-11-05 11:30:46.966820] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:47.895 [2024-11-05 11:30:46.967048] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:47.895 [2024-11-05 11:30:46.967062] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:47.895 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.896 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.896 11:30:46 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:47.896 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.896 11:30:46 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:47.896 [2024-11-05 11:30:46.976899] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:47.896 [2024-11-05 11:30:47.016849] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:47.896 [2024-11-05 11:30:47.017460] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:47.896 [2024-11-05 11:30:47.024833] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:47.896 [2024-11-05 11:30:47.025046] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:47.896 [2024-11-05 11:30:47.025058] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
00:14:47.896 [2024-11-05 11:30:47.040879] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:47.896 [2024-11-05 11:30:47.081232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:47.896 [2024-11-05 11:30:47.082243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:47.896 [2024-11-05 11:30:47.088823] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:47.896 [2024-11-05 11:30:47.089028] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:47.896 [2024-11-05 11:30:47.089041] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.896 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:48.157 [2024-11-05 11:30:47.280864] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:48.157 [2024-11-05 11:30:47.284408] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:48.157 [2024-11-05 11:30:47.284435] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:48.157 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:48.157 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:48.157 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:48.157 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.157 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:48.418 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.418 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:48.418 11:30:47 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:48.418 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.418 11:30:47 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:48.989 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.990 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:49.251 00:14:49.251 real 0m3.163s 00:14:49.251 user 0m0.809s 00:14:49.251 sys 0m0.134s 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:49.251 11:30:48 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:49.251 ************************************ 00:14:49.251 END TEST test_create_multi_ublk 00:14:49.251 ************************************ 00:14:49.251 11:30:48 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:49.251 11:30:48 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:49.251 11:30:48 ublk -- ublk/ublk.sh@130 -- # killprocess 70807 00:14:49.251 11:30:48 ublk -- common/autotest_common.sh@952 -- # '[' -z 70807 ']' 00:14:49.251 11:30:48 ublk -- common/autotest_common.sh@956 -- # kill -0 70807 00:14:49.251 11:30:48 ublk -- common/autotest_common.sh@957 -- # uname 00:14:49.252 11:30:48 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:49.252 11:30:48 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70807 00:14:49.511 11:30:48 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:49.511 11:30:48 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:49.511 killing process with pid 70807 00:14:49.511 11:30:48 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70807' 00:14:49.511 11:30:48 ublk -- common/autotest_common.sh@971 -- # kill 70807 00:14:49.511 11:30:48 ublk -- common/autotest_common.sh@976 -- # wait 70807 00:14:50.078 [2024-11-05 11:30:49.059511] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:50.078 [2024-11-05 11:30:49.059554] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:50.644 00:14:50.644 real 0m24.041s 00:14:50.644 user 0m34.357s 00:14:50.644 sys 0m9.771s 00:14:50.644 11:30:49 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:50.644 11:30:49 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:50.644 ************************************ 00:14:50.644 END TEST ublk 00:14:50.644 ************************************ 00:14:50.644 11:30:49 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:50.644 11:30:49 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 
']' 00:14:50.644 11:30:49 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:50.644 11:30:49 -- common/autotest_common.sh@10 -- # set +x 00:14:50.644 ************************************ 00:14:50.644 START TEST ublk_recovery 00:14:50.644 ************************************ 00:14:50.644 11:30:49 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:50.644 * Looking for test storage... 00:14:50.644 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:50.644 11:30:49 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:50.644 11:30:49 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:50.644 11:30:49 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:50.644 11:30:49 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:50.644 11:30:49 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:50.644 11:30:49 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:50.644 11:30:49 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:50.645 11:30:49 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:50.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.645 --rc genhtml_branch_coverage=1 00:14:50.645 --rc genhtml_function_coverage=1 00:14:50.645 --rc genhtml_legend=1 00:14:50.645 --rc geninfo_all_blocks=1 00:14:50.645 --rc geninfo_unexecuted_blocks=1 00:14:50.645 00:14:50.645 ' 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:50.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.645 --rc genhtml_branch_coverage=1 00:14:50.645 --rc genhtml_function_coverage=1 00:14:50.645 --rc genhtml_legend=1 00:14:50.645 --rc geninfo_all_blocks=1 00:14:50.645 --rc geninfo_unexecuted_blocks=1 00:14:50.645 00:14:50.645 ' 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:50.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.645 --rc genhtml_branch_coverage=1 00:14:50.645 --rc genhtml_function_coverage=1 00:14:50.645 --rc genhtml_legend=1 00:14:50.645 --rc geninfo_all_blocks=1 00:14:50.645 --rc geninfo_unexecuted_blocks=1 00:14:50.645 00:14:50.645 ' 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:50.645 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:50.645 --rc genhtml_branch_coverage=1 00:14:50.645 --rc genhtml_function_coverage=1 00:14:50.645 --rc genhtml_legend=1 00:14:50.645 --rc geninfo_all_blocks=1 00:14:50.645 --rc geninfo_unexecuted_blocks=1 00:14:50.645 00:14:50.645 ' 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:50.645 11:30:49 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71199 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71199 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71199 ']' 00:14:50.645 11:30:49 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:50.645 11:30:49 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:50.903 [2024-11-05 11:30:49.940437] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:14:50.903 [2024-11-05 11:30:49.940526] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71199 ] 00:14:50.903 [2024-11-05 11:30:50.085650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:50.903 [2024-11-05 11:30:50.164527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.903 [2024-11-05 11:30:50.164599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:14:51.470 11:30:50 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.470 [2024-11-05 11:30:50.739823] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:51.470 [2024-11-05 11:30:50.741315] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.470 11:30:50 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.470 11:30:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.728 malloc0 00:14:51.728 11:30:50 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.728 11:30:50 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:51.728 11:30:50 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.728 11:30:50 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:51.728 [2024-11-05 11:30:50.820013] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:51.728 [2024-11-05 11:30:50.820094] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:51.728 [2024-11-05 11:30:50.820103] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:51.728 [2024-11-05 11:30:50.820110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:51.728 [2024-11-05 11:30:50.828889] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:51.728 [2024-11-05 11:30:50.828908] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:51.728 [2024-11-05 11:30:50.835829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:51.728 [2024-11-05 11:30:50.835937] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:51.728 [2024-11-05 11:30:50.840127] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:51.728 1 00:14:51.728 11:30:50 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.728 11:30:50 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:52.662 11:30:51 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71234 00:14:52.662 11:30:51 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:52.663 11:30:51 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:52.936 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:52.936 fio-3.35 00:14:52.936 Starting 1 process 00:14:58.210 11:30:56 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71199 00:14:58.210 11:30:56 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:15:03.492 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71199 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:15:03.492 11:31:01 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=71338 00:15:03.492 11:31:01 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:03.492 11:31:01 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:03.492 11:31:01 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 71338 00:15:03.492 11:31:01 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 71338 ']' 00:15:03.492 11:31:01 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.493 11:31:01 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:03.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.493 11:31:01 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.493 11:31:01 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:03.493 11:31:01 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.493 [2024-11-05 11:31:01.942727] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:15:03.493 [2024-11-05 11:31:01.942854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71338 ] 00:15:03.493 [2024-11-05 11:31:02.099996] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:03.493 [2024-11-05 11:31:02.213538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.493 [2024-11-05 11:31:02.213652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:15:03.754 11:31:02 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.754 [2024-11-05 11:31:02.900829] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:03.754 [2024-11-05 11:31:02.903075] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.754 11:31:02 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.754 11:31:02 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.754 malloc0 00:15:03.754 11:31:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.754 11:31:03 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:15:03.754 11:31:03 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.754 11:31:03 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:03.754 [2024-11-05 11:31:03.021002] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:15:03.754 [2024-11-05 11:31:03.021061] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:03.754 [2024-11-05 11:31:03.021071] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:15:03.754 [2024-11-05 11:31:03.028871] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:15:03.754 [2024-11-05 11:31:03.028909] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:15:03.754 [2024-11-05 11:31:03.028919] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:15:03.754 [2024-11-05 11:31:03.029011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:15:03.754 1 00:15:04.013 11:31:03 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.013 11:31:03 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71234 00:15:04.013 [2024-11-05 11:31:03.036837] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:15:04.013 [2024-11-05 11:31:03.044590] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:15:04.013 [2024-11-05 11:31:03.052088] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:15:04.013 [2024-11-05 
11:31:03.052125] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:16:00.273 00:16:00.273 fio_test: (groupid=0, jobs=1): err= 0: pid=71237: Tue Nov 5 11:31:52 2024 00:16:00.273 read: IOPS=28.5k, BW=111MiB/s (117MB/s)(6683MiB/60002msec) 00:16:00.273 slat (nsec): min=1092, max=672063, avg=4832.85, stdev=2186.80 00:16:00.273 clat (usec): min=747, max=6203.6k, avg=2203.76, stdev=37926.37 00:16:00.273 lat (usec): min=847, max=6203.6k, avg=2208.60, stdev=37926.36 00:16:00.273 clat percentiles (usec): 00:16:00.273 | 1.00th=[ 1631], 5.00th=[ 1762], 10.00th=[ 1778], 20.00th=[ 1811], 00:16:00.273 | 30.00th=[ 1827], 40.00th=[ 1844], 50.00th=[ 1844], 60.00th=[ 1860], 00:16:00.273 | 70.00th=[ 1876], 80.00th=[ 1909], 90.00th=[ 1975], 95.00th=[ 2802], 00:16:00.273 | 99.00th=[ 4686], 99.50th=[ 5211], 99.90th=[ 6390], 99.95th=[ 7504], 00:16:00.273 | 99.99th=[13042] 00:16:00.273 bw ( KiB/s): min= 1000, max=131408, per=100.00%, avg=125630.69, stdev=16840.91, samples=108 00:16:00.273 iops : min= 250, max=32852, avg=31407.67, stdev=4210.22, samples=108 00:16:00.273 write: IOPS=28.5k, BW=111MiB/s (117MB/s)(6678MiB/60002msec); 0 zone resets 00:16:00.273 slat (nsec): min=1121, max=876286, avg=4861.96, stdev=2309.46 00:16:00.273 clat (usec): min=857, max=6203.7k, avg=2276.38, stdev=37942.09 00:16:00.273 lat (usec): min=868, max=6203.7k, avg=2281.24, stdev=37942.09 00:16:00.273 clat percentiles (usec): 00:16:00.273 | 1.00th=[ 1663], 5.00th=[ 1827], 10.00th=[ 1860], 20.00th=[ 1893], 00:16:00.273 | 30.00th=[ 1909], 40.00th=[ 1926], 50.00th=[ 1942], 60.00th=[ 1958], 00:16:00.273 | 70.00th=[ 1975], 80.00th=[ 1991], 90.00th=[ 2040], 95.00th=[ 2704], 00:16:00.273 | 99.00th=[ 4686], 99.50th=[ 5211], 99.90th=[ 6456], 99.95th=[ 7439], 00:16:00.273 | 99.99th=[13173] 00:16:00.273 bw ( KiB/s): min= 1056, max=132160, per=100.00%, avg=125528.53, stdev=16920.69, samples=108 00:16:00.273 iops : min= 264, max=33040, avg=31382.13, stdev=4230.17, samples=108 00:16:00.273 lat (usec) : 750=0.01%, 1000=0.01% 00:16:00.273 lat (msec) : 2=87.49%, 4=10.21%, 10=2.28%, 20=0.01%, >=2000=0.01% 00:16:00.273 cpu : usr=6.34%, sys=28.29%, ctx=117039, majf=0, minf=13 00:16:00.273 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:16:00.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:00.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:00.273 issued rwts: total=1710878,1709549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:00.273 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:00.273 00:16:00.273 Run status group 0 (all jobs): 00:16:00.273 READ: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=6683MiB (7008MB), run=60002-60002msec 00:16:00.273 WRITE: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=6678MiB (7002MB), run=60002-60002msec 00:16:00.273 00:16:00.273 Disk stats (read/write): 00:16:00.273 ublkb1: ios=1707438/1706056, merge=0/0, ticks=3676075/3662421, in_queue=7338496, util=99.90% 00:16:00.273 11:31:52 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.273 [2024-11-05 11:31:52.109951] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:00.273 [2024-11-05 11:31:52.140913] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 
completed 00:16:00.273 [2024-11-05 11:31:52.141042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:00.273 [2024-11-05 11:31:52.147829] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:00.273 [2024-11-05 11:31:52.147912] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:00.273 [2024-11-05 11:31:52.147920] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.273 11:31:52 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.273 [2024-11-05 11:31:52.163890] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:00.273 [2024-11-05 11:31:52.167376] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:00.273 [2024-11-05 11:31:52.167406] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:00.273 11:31:52 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:16:00.273 11:31:52 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:16:00.273 11:31:52 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 71338 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 71338 ']' 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 71338 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:16:00.273 11:31:52 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71338 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71338' 00:16:00.274 killing process with pid 71338 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@971 -- # kill 71338 00:16:00.274 11:31:52 ublk_recovery -- common/autotest_common.sh@976 -- # wait 71338 00:16:00.274 [2024-11-05 11:31:53.308697] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:00.274 [2024-11-05 11:31:53.308739] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:00.274 ************************************ 00:16:00.274 END TEST ublk_recovery 00:16:00.274 ************************************ 00:16:00.274 00:16:00.274 real 1m4.458s 00:16:00.274 user 1m43.227s 00:16:00.274 sys 0m35.791s 00:16:00.274 11:31:54 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:00.274 11:31:54 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:00.274 11:31:54 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@256 -- # timing_exit lib 00:16:00.274 11:31:54 -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:00.274 11:31:54 -- common/autotest_common.sh@10 -- # set +x 00:16:00.274 11:31:54 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@307 -- # '[' 0 -eq 
1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:16:00.274 11:31:54 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:00.274 11:31:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:16:00.274 11:31:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:00.274 11:31:54 -- common/autotest_common.sh@10 -- # set +x 00:16:00.274 ************************************ 00:16:00.274 START TEST ftl 00:16:00.274 ************************************ 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:00.274 * Looking for test storage... 00:16:00.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.274 11:31:54 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.274 11:31:54 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.274 11:31:54 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.274 11:31:54 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.274 11:31:54 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.274 11:31:54 ftl -- scripts/common.sh@344 -- # case "$op" in 00:16:00.274 11:31:54 ftl -- scripts/common.sh@345 -- # : 1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.274 11:31:54 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.274 11:31:54 ftl -- scripts/common.sh@365 -- # decimal 1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@353 -- # local d=1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.274 11:31:54 ftl -- scripts/common.sh@355 -- # echo 1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.274 11:31:54 ftl -- scripts/common.sh@366 -- # decimal 2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@353 -- # local d=2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.274 11:31:54 ftl -- scripts/common.sh@355 -- # echo 2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.274 11:31:54 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.274 11:31:54 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.274 11:31:54 ftl -- scripts/common.sh@368 -- # return 0 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:00.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.274 --rc genhtml_branch_coverage=1 00:16:00.274 --rc genhtml_function_coverage=1 00:16:00.274 --rc genhtml_legend=1 00:16:00.274 --rc geninfo_all_blocks=1 00:16:00.274 --rc geninfo_unexecuted_blocks=1 00:16:00.274 00:16:00.274 ' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:00.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.274 --rc genhtml_branch_coverage=1 00:16:00.274 --rc genhtml_function_coverage=1 00:16:00.274 --rc genhtml_legend=1 00:16:00.274 --rc geninfo_all_blocks=1 00:16:00.274 --rc geninfo_unexecuted_blocks=1 00:16:00.274 00:16:00.274 ' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:00.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.274 --rc genhtml_branch_coverage=1 00:16:00.274 --rc genhtml_function_coverage=1 00:16:00.274 --rc genhtml_legend=1 00:16:00.274 --rc geninfo_all_blocks=1 00:16:00.274 --rc geninfo_unexecuted_blocks=1 00:16:00.274 00:16:00.274 ' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:00.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.274 --rc genhtml_branch_coverage=1 00:16:00.274 --rc genhtml_function_coverage=1 00:16:00.274 --rc genhtml_legend=1 00:16:00.274 --rc geninfo_all_blocks=1 00:16:00.274 --rc geninfo_unexecuted_blocks=1 00:16:00.274 00:16:00.274 ' 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:00.274 11:31:54 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:16:00.274 11:31:54 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.274 11:31:54 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.274 11:31:54 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:00.274 11:31:54 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:00.274 11:31:54 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.274 11:31:54 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.274 11:31:54 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.274 11:31:54 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:00.274 11:31:54 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:00.274 11:31:54 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:00.274 11:31:54 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:00.274 11:31:54 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.274 11:31:54 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.274 11:31:54 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:00.274 11:31:54 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:00.274 11:31:54 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:00.274 11:31:54 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:00.274 11:31:54 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:00.274 11:31:54 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:00.274 11:31:54 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:00.274 11:31:54 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.274 11:31:54 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:00.274 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:00.274 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:00.274 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:00.274 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:00.274 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72148 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72148 00:16:00.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@833 -- # '[' -z 72148 ']' 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.274 11:31:54 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:00.274 11:31:54 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.275 11:31:54 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:00.275 11:31:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:00.275 [2024-11-05 11:31:54.941480] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:16:00.275 [2024-11-05 11:31:54.941599] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72148 ] 00:16:00.275 [2024-11-05 11:31:55.100790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:00.275 [2024-11-05 11:31:55.196204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:00.275 11:31:55 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:00.275 11:31:55 ftl -- common/autotest_common.sh@866 -- # return 0 00:16:00.275 11:31:55 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:16:00.275 11:31:55 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:00.275 11:31:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:16:00.275 11:31:56 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@50 -- # break 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@63 -- # break 00:16:00.275 11:31:57 ftl -- ftl/ftl.sh@66 -- # killprocess 72148 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@952 -- # '[' -z 72148 ']' 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@956 -- # kill -0 72148 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@957 -- # uname 
00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72148 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:00.275 killing process with pid 72148 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72148' 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@971 -- # kill 72148 00:16:00.275 11:31:57 ftl -- common/autotest_common.sh@976 -- # wait 72148 00:16:00.275 11:31:58 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:16:00.275 11:31:58 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:00.275 11:31:58 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:16:00.275 11:31:58 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:00.275 11:31:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:00.275 ************************************ 00:16:00.275 START TEST ftl_fio_basic 00:16:00.275 ************************************ 00:16:00.275 11:31:58 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:16:00.275 * Looking for test storage... 00:16:00.275 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.275 --rc genhtml_branch_coverage=1 00:16:00.275 --rc genhtml_function_coverage=1 00:16:00.275 --rc genhtml_legend=1 00:16:00.275 --rc geninfo_all_blocks=1 00:16:00.275 --rc geninfo_unexecuted_blocks=1 00:16:00.275 00:16:00.275 ' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.275 --rc genhtml_branch_coverage=1 00:16:00.275 --rc genhtml_function_coverage=1 00:16:00.275 --rc genhtml_legend=1 00:16:00.275 --rc geninfo_all_blocks=1 00:16:00.275 --rc geninfo_unexecuted_blocks=1 00:16:00.275 00:16:00.275 ' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.275 --rc genhtml_branch_coverage=1 00:16:00.275 --rc genhtml_function_coverage=1 00:16:00.275 --rc genhtml_legend=1 00:16:00.275 --rc geninfo_all_blocks=1 00:16:00.275 --rc geninfo_unexecuted_blocks=1 00:16:00.275 00:16:00.275 ' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:00.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:00.275 --rc genhtml_branch_coverage=1 00:16:00.275 --rc genhtml_function_coverage=1 00:16:00.275 --rc genhtml_legend=1 00:16:00.275 --rc geninfo_all_blocks=1 00:16:00.275 --rc geninfo_unexecuted_blocks=1 00:16:00.275 00:16:00.275 ' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:00.275 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72286 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72286 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 72286 ']' 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:00.276 11:31:59 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:16:00.276 [2024-11-05 11:31:59.176793] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:16:00.276 [2024-11-05 11:31:59.177467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72286 ] 00:16:00.276 [2024-11-05 11:31:59.340867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:00.276 [2024-11-05 11:31:59.463280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:00.276 [2024-11-05 11:31:59.463579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:00.276 [2024-11-05 11:31:59.463675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:01.218 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:01.480 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:01.480 { 00:16:01.480 "name": "nvme0n1", 00:16:01.480 "aliases": [ 00:16:01.480 "37eca212-accf-42fa-abde-76fc982b5055" 00:16:01.480 ], 00:16:01.480 "product_name": "NVMe disk", 00:16:01.480 "block_size": 4096, 00:16:01.480 "num_blocks": 1310720, 00:16:01.480 "uuid": "37eca212-accf-42fa-abde-76fc982b5055", 00:16:01.480 "numa_id": -1, 00:16:01.480 "assigned_rate_limits": { 00:16:01.480 "rw_ios_per_sec": 0, 00:16:01.480 "rw_mbytes_per_sec": 0, 00:16:01.480 "r_mbytes_per_sec": 0, 00:16:01.480 "w_mbytes_per_sec": 0 00:16:01.480 }, 00:16:01.480 "claimed": false, 00:16:01.480 "zoned": false, 00:16:01.480 "supported_io_types": { 00:16:01.480 "read": true, 00:16:01.480 "write": true, 00:16:01.480 "unmap": true, 00:16:01.480 "flush": true, 00:16:01.480 "reset": true, 00:16:01.481 "nvme_admin": true, 00:16:01.481 "nvme_io": true, 00:16:01.481 "nvme_io_md": false, 00:16:01.481 "write_zeroes": true, 00:16:01.481 "zcopy": false, 00:16:01.481 "get_zone_info": false, 00:16:01.481 "zone_management": false, 00:16:01.481 "zone_append": false, 00:16:01.481 "compare": true, 00:16:01.481 "compare_and_write": false, 00:16:01.481 "abort": true, 00:16:01.481 
"seek_hole": false, 00:16:01.481 "seek_data": false, 00:16:01.481 "copy": true, 00:16:01.481 "nvme_iov_md": false 00:16:01.481 }, 00:16:01.481 "driver_specific": { 00:16:01.481 "nvme": [ 00:16:01.481 { 00:16:01.481 "pci_address": "0000:00:11.0", 00:16:01.481 "trid": { 00:16:01.481 "trtype": "PCIe", 00:16:01.481 "traddr": "0000:00:11.0" 00:16:01.481 }, 00:16:01.481 "ctrlr_data": { 00:16:01.481 "cntlid": 0, 00:16:01.481 "vendor_id": "0x1b36", 00:16:01.481 "model_number": "QEMU NVMe Ctrl", 00:16:01.481 "serial_number": "12341", 00:16:01.481 "firmware_revision": "8.0.0", 00:16:01.481 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:01.481 "oacs": { 00:16:01.481 "security": 0, 00:16:01.481 "format": 1, 00:16:01.481 "firmware": 0, 00:16:01.481 "ns_manage": 1 00:16:01.481 }, 00:16:01.481 "multi_ctrlr": false, 00:16:01.481 "ana_reporting": false 00:16:01.481 }, 00:16:01.481 "vs": { 00:16:01.481 "nvme_version": "1.4" 00:16:01.481 }, 00:16:01.481 "ns_data": { 00:16:01.481 "id": 1, 00:16:01.481 "can_share": false 00:16:01.481 } 00:16:01.481 } 00:16:01.481 ], 00:16:01.481 "mp_policy": "active_passive" 00:16:01.481 } 00:16:01.481 } 00:16:01.481 ]' 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:01.481 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:01.742 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:16:01.742 11:32:00 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:02.003 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=5e40073e-0166-469f-838c-bb93d057c983 00:16:02.003 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5e40073e-0166-469f-838c-bb93d057c983 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b283d828-5007-4b2b-abbb-ca9c6bf724aa 
00:16:02.264 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:02.264 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:02.524 { 00:16:02.524 "name": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:02.524 "aliases": [ 00:16:02.524 "lvs/nvme0n1p0" 00:16:02.524 ], 00:16:02.524 "product_name": "Logical Volume", 00:16:02.524 "block_size": 4096, 00:16:02.524 "num_blocks": 26476544, 00:16:02.524 "uuid": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:02.524 "assigned_rate_limits": { 00:16:02.524 "rw_ios_per_sec": 0, 00:16:02.524 "rw_mbytes_per_sec": 0, 00:16:02.524 "r_mbytes_per_sec": 0, 00:16:02.524 "w_mbytes_per_sec": 0 00:16:02.524 }, 00:16:02.524 "claimed": false, 00:16:02.524 "zoned": false, 00:16:02.524 "supported_io_types": { 00:16:02.524 "read": true, 00:16:02.524 "write": true, 00:16:02.524 "unmap": true, 00:16:02.524 "flush": false, 00:16:02.524 "reset": true, 00:16:02.524 "nvme_admin": false, 00:16:02.524 "nvme_io": false, 00:16:02.524 "nvme_io_md": false, 00:16:02.524 "write_zeroes": true, 00:16:02.524 "zcopy": false, 00:16:02.524 "get_zone_info": false, 00:16:02.524 "zone_management": false, 00:16:02.524 "zone_append": false, 00:16:02.524 "compare": false, 00:16:02.524 "compare_and_write": false, 00:16:02.524 "abort": false, 00:16:02.524 "seek_hole": true, 00:16:02.524 "seek_data": true, 00:16:02.524 "copy": false, 00:16:02.524 "nvme_iov_md": false 00:16:02.524 }, 00:16:02.524 "driver_specific": { 00:16:02.524 "lvol": { 00:16:02.524 "lvol_store_uuid": "5e40073e-0166-469f-838c-bb93d057c983", 00:16:02.524 "base_bdev": "nvme0n1", 00:16:02.524 "thin_provision": true, 00:16:02.524 "num_allocated_clusters": 0, 00:16:02.524 "snapshot": false, 00:16:02.524 "clone": false, 00:16:02.524 "esnap_clone": false 00:16:02.524 } 00:16:02.524 } 00:16:02.524 } 00:16:02.524 ]' 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:16:02.524 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:02.785 11:32:01 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:02.785 11:32:01 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:03.045 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:03.045 { 00:16:03.045 "name": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:03.045 "aliases": [ 00:16:03.045 "lvs/nvme0n1p0" 00:16:03.045 ], 00:16:03.045 "product_name": "Logical Volume", 00:16:03.045 "block_size": 4096, 00:16:03.045 "num_blocks": 26476544, 00:16:03.045 "uuid": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:03.045 "assigned_rate_limits": { 00:16:03.045 "rw_ios_per_sec": 0, 00:16:03.045 "rw_mbytes_per_sec": 0, 00:16:03.045 "r_mbytes_per_sec": 0, 00:16:03.045 "w_mbytes_per_sec": 0 00:16:03.045 }, 00:16:03.045 "claimed": false, 00:16:03.045 "zoned": false, 00:16:03.045 "supported_io_types": { 00:16:03.045 "read": true, 00:16:03.045 "write": true, 00:16:03.045 "unmap": true, 00:16:03.045 "flush": false, 00:16:03.045 "reset": true, 00:16:03.045 "nvme_admin": false, 00:16:03.045 "nvme_io": false, 00:16:03.045 "nvme_io_md": false, 00:16:03.045 "write_zeroes": true, 00:16:03.045 "zcopy": false, 00:16:03.045 "get_zone_info": false, 00:16:03.045 "zone_management": false, 00:16:03.045 "zone_append": false, 00:16:03.045 "compare": false, 00:16:03.045 "compare_and_write": false, 00:16:03.045 "abort": false, 00:16:03.045 "seek_hole": true, 00:16:03.045 "seek_data": true, 00:16:03.045 "copy": false, 00:16:03.045 "nvme_iov_md": false 00:16:03.045 }, 00:16:03.045 "driver_specific": { 00:16:03.045 "lvol": { 00:16:03.045 "lvol_store_uuid": "5e40073e-0166-469f-838c-bb93d057c983", 00:16:03.045 "base_bdev": "nvme0n1", 00:16:03.045 "thin_provision": true, 00:16:03.045 "num_allocated_clusters": 0, 00:16:03.045 "snapshot": false, 00:16:03.045 "clone": false, 00:16:03.045 "esnap_clone": false 00:16:03.046 } 00:16:03.046 } 00:16:03.046 } 00:16:03.046 ]' 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:16:03.046 11:32:02 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:03.306 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:16:03.306 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:16:03.306 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:16:03.306 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:16:03.306 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:03.306 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b b283d828-5007-4b2b-abbb-ca9c6bf724aa 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:16:03.307 { 00:16:03.307 "name": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:03.307 "aliases": [ 00:16:03.307 "lvs/nvme0n1p0" 00:16:03.307 ], 00:16:03.307 "product_name": "Logical Volume", 00:16:03.307 "block_size": 4096, 00:16:03.307 "num_blocks": 26476544, 00:16:03.307 "uuid": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:03.307 "assigned_rate_limits": { 00:16:03.307 "rw_ios_per_sec": 0, 00:16:03.307 "rw_mbytes_per_sec": 0, 00:16:03.307 "r_mbytes_per_sec": 0, 00:16:03.307 "w_mbytes_per_sec": 0 00:16:03.307 }, 00:16:03.307 "claimed": false, 00:16:03.307 "zoned": false, 00:16:03.307 "supported_io_types": { 00:16:03.307 "read": true, 00:16:03.307 "write": true, 00:16:03.307 "unmap": true, 00:16:03.307 "flush": false, 00:16:03.307 "reset": true, 00:16:03.307 "nvme_admin": false, 00:16:03.307 "nvme_io": false, 00:16:03.307 "nvme_io_md": false, 00:16:03.307 "write_zeroes": true, 00:16:03.307 "zcopy": false, 00:16:03.307 "get_zone_info": false, 00:16:03.307 "zone_management": false, 00:16:03.307 "zone_append": false, 00:16:03.307 "compare": false, 00:16:03.307 "compare_and_write": false, 00:16:03.307 "abort": false, 00:16:03.307 "seek_hole": true, 00:16:03.307 "seek_data": true, 00:16:03.307 "copy": false, 00:16:03.307 "nvme_iov_md": false 00:16:03.307 }, 00:16:03.307 "driver_specific": { 00:16:03.307 "lvol": { 00:16:03.307 "lvol_store_uuid": "5e40073e-0166-469f-838c-bb93d057c983", 00:16:03.307 "base_bdev": "nvme0n1", 00:16:03.307 "thin_provision": true, 00:16:03.307 "num_allocated_clusters": 0, 00:16:03.307 "snapshot": false, 00:16:03.307 "clone": false, 00:16:03.307 "esnap_clone": false 00:16:03.307 } 00:16:03.307 } 00:16:03.307 } 00:16:03.307 ]' 00:16:03.307 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:16:03.569 11:32:02 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d b283d828-5007-4b2b-abbb-ca9c6bf724aa -c nvc0n1p0 --l2p_dram_limit 60 00:16:03.569 [2024-11-05 11:32:02.821771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.821932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:03.569 [2024-11-05 11:32:02.821952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:03.569 
[2024-11-05 11:32:02.821959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.822015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.822023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:03.569 [2024-11-05 11:32:02.822030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:16:03.569 [2024-11-05 11:32:02.822038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.822070] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:03.569 [2024-11-05 11:32:02.822655] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:03.569 [2024-11-05 11:32:02.822674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.822680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:03.569 [2024-11-05 11:32:02.822688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:16:03.569 [2024-11-05 11:32:02.822693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.822750] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID ad1e829e-2f51-4fb6-87af-455dad121d66 00:16:03.569 [2024-11-05 11:32:02.823749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.823773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:03.569 [2024-11-05 11:32:02.823783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:16:03.569 [2024-11-05 11:32:02.823790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.828471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.828500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:03.569 [2024-11-05 11:32:02.828508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.628 ms 00:16:03.569 [2024-11-05 11:32:02.828515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.828590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.828601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:03.569 [2024-11-05 11:32:02.828607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:03.569 [2024-11-05 11:32:02.828617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.828658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.828667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:03.569 [2024-11-05 11:32:02.828674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:16:03.569 [2024-11-05 11:32:02.828681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.828701] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:03.569 [2024-11-05 11:32:02.831574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 
11:32:02.831598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:03.569 [2024-11-05 11:32:02.831608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.875 ms 00:16:03.569 [2024-11-05 11:32:02.831615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.831644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.831652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:03.569 [2024-11-05 11:32:02.831660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:16:03.569 [2024-11-05 11:32:02.831665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.831689] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:03.569 [2024-11-05 11:32:02.831813] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:03.569 [2024-11-05 11:32:02.831828] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:03.569 [2024-11-05 11:32:02.831837] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:03.569 [2024-11-05 11:32:02.831846] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:03.569 [2024-11-05 11:32:02.831853] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:03.569 [2024-11-05 11:32:02.831860] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:03.569 [2024-11-05 11:32:02.831866] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:03.569 [2024-11-05 11:32:02.831873] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:03.569 [2024-11-05 11:32:02.831878] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:03.569 [2024-11-05 11:32:02.831885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.831891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:03.569 [2024-11-05 11:32:02.831901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:16:03.569 [2024-11-05 11:32:02.831906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.831977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.569 [2024-11-05 11:32:02.831983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:03.569 [2024-11-05 11:32:02.831991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:16:03.569 [2024-11-05 11:32:02.831996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.569 [2024-11-05 11:32:02.832076] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:03.569 [2024-11-05 11:32:02.832085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:03.569 [2024-11-05 11:32:02.832092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832106] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:16:03.569 [2024-11-05 11:32:02.832111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832123] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:03.569 [2024-11-05 11:32:02.832130] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832137] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.569 [2024-11-05 11:32:02.832144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:03.569 [2024-11-05 11:32:02.832149] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:03.569 [2024-11-05 11:32:02.832155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:03.569 [2024-11-05 11:32:02.832160] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:03.569 [2024-11-05 11:32:02.832167] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:03.569 [2024-11-05 11:32:02.832172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:03.569 [2024-11-05 11:32:02.832186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:03.569 [2024-11-05 11:32:02.832203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:03.569 [2024-11-05 11:32:02.832220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:03.569 [2024-11-05 11:32:02.832237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832242] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:03.569 [2024-11-05 11:32:02.832253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:03.569 [2024-11-05 11:32:02.832260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:03.569 [2024-11-05 11:32:02.832264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:03.570 [2024-11-05 11:32:02.832272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:03.570 [2024-11-05 11:32:02.832276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.570 [2024-11-05 11:32:02.832283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:03.570 [2024-11-05 11:32:02.832298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:03.570 [2024-11-05 11:32:02.832304] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:03.570 [2024-11-05 11:32:02.832309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:03.570 [2024-11-05 11:32:02.832315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:16:03.570 [2024-11-05 11:32:02.832320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.570 [2024-11-05 11:32:02.832326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:03.570 [2024-11-05 11:32:02.832333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:03.570 [2024-11-05 11:32:02.832340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.570 [2024-11-05 11:32:02.832345] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:03.570 [2024-11-05 11:32:02.832352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:03.570 [2024-11-05 11:32:02.832358] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:03.570 [2024-11-05 11:32:02.832372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:03.570 [2024-11-05 11:32:02.832378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:03.570 [2024-11-05 11:32:02.832386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:03.570 [2024-11-05 11:32:02.832391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:03.570 [2024-11-05 11:32:02.832397] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:03.570 [2024-11-05 11:32:02.832402] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:03.570 [2024-11-05 11:32:02.832408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:03.570 [2024-11-05 11:32:02.832416] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:03.570 [2024-11-05 11:32:02.832425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:03.570 [2024-11-05 11:32:02.832438] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:03.570 [2024-11-05 11:32:02.832443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:03.570 [2024-11-05 11:32:02.832450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:03.570 [2024-11-05 11:32:02.832456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:03.570 [2024-11-05 11:32:02.832463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:03.570 [2024-11-05 11:32:02.832468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:03.570 [2024-11-05 11:32:02.832475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:16:03.570 [2024-11-05 11:32:02.832480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:03.570 [2024-11-05 11:32:02.832489] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832495] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832502] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:03.570 [2024-11-05 11:32:02.832520] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:03.570 [2024-11-05 11:32:02.832528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:03.570 [2024-11-05 11:32:02.832541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:03.570 [2024-11-05 11:32:02.832548] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:03.570 [2024-11-05 11:32:02.832555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:03.570 [2024-11-05 11:32:02.832560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:03.570 [2024-11-05 11:32:02.832567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:03.570 [2024-11-05 11:32:02.832574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:16:03.570 [2024-11-05 11:32:02.832581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:03.570 [2024-11-05 11:32:02.832633] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
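The layout figures in the dump above follow directly from the bdev geometry reported earlier in this log; a quick recomputation for reference (plain shell arithmetic, values copied from the lines above):

    echo $(( 26476544 * 4096 / 1024 / 1024 ))   # 103424 -> "Base device capacity: 103424.00 MiB"
    echo $(( 20971520 * 4 / 1024 / 1024 ))      # 80     -> the 80.00 MiB l2p region (one 4-byte entry per 4 KiB user block)
    # --l2p_dram_limit 60 caps how much of that 80 MiB table stays resident in DRAM,
    # which is why L2P initialization below reports "l2p maximum resident size is: 59 (of 60) MiB".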
00:16:03.570 [2024-11-05 11:32:02.832644] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:06.858 [2024-11-05 11:32:05.622993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.623054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:06.858 [2024-11-05 11:32:05.623069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2790.348 ms 00:16:06.858 [2024-11-05 11:32:05.623083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.648165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.648215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:06.858 [2024-11-05 11:32:05.648228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.872 ms 00:16:06.858 [2024-11-05 11:32:05.648237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.648356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.648368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:06.858 [2024-11-05 11:32:05.648376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:16:06.858 [2024-11-05 11:32:05.648387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.690642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.690689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:06.858 [2024-11-05 11:32:05.690704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.210 ms 00:16:06.858 [2024-11-05 11:32:05.690719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.690763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.690775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:06.858 [2024-11-05 11:32:05.690784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:06.858 [2024-11-05 11:32:05.690795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.691186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.691220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:06.858 [2024-11-05 11:32:05.691230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.312 ms 00:16:06.858 [2024-11-05 11:32:05.691241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.691372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.691384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:06.858 [2024-11-05 11:32:05.691392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:16:06.858 [2024-11-05 11:32:05.691405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.707252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.707285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:06.858 [2024-11-05 
11:32:05.707296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.821 ms 00:16:06.858 [2024-11-05 11:32:05.707306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.718624] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:16:06.858 [2024-11-05 11:32:05.732860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.732902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:06.858 [2024-11-05 11:32:05.732915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.455 ms 00:16:06.858 [2024-11-05 11:32:05.732924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.784864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.784900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:06.858 [2024-11-05 11:32:05.784913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.901 ms 00:16:06.858 [2024-11-05 11:32:05.784922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.785086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.785095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:06.858 [2024-11-05 11:32:05.785108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.137 ms 00:16:06.858 [2024-11-05 11:32:05.785115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.807720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.807752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:06.858 [2024-11-05 11:32:05.807765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.553 ms 00:16:06.858 [2024-11-05 11:32:05.807775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.829785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.829826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:06.858 [2024-11-05 11:32:05.829839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.983 ms 00:16:06.858 [2024-11-05 11:32:05.829846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.830405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.830431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:06.858 [2024-11-05 11:32:05.830442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.536 ms 00:16:06.858 [2024-11-05 11:32:05.830449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.894984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.895016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:06.858 [2024-11-05 11:32:05.895032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.498 ms 00:16:06.858 [2024-11-05 11:32:05.895040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 
11:32:05.919013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.919046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:06.858 [2024-11-05 11:32:05.919058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.891 ms 00:16:06.858 [2024-11-05 11:32:05.919067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.942061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.942091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:06.858 [2024-11-05 11:32:05.942103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.949 ms 00:16:06.858 [2024-11-05 11:32:05.942111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.965392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.965422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:06.858 [2024-11-05 11:32:05.965434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.237 ms 00:16:06.858 [2024-11-05 11:32:05.965442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.858 [2024-11-05 11:32:05.965485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.858 [2024-11-05 11:32:05.965494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:06.858 [2024-11-05 11:32:05.965506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:06.859 [2024-11-05 11:32:05.965513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.859 [2024-11-05 11:32:05.965597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:06.859 [2024-11-05 11:32:05.965607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:06.859 [2024-11-05 11:32:05.965617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:16:06.859 [2024-11-05 11:32:05.965624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:06.859 [2024-11-05 11:32:05.966477] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3144.280 ms, result 0 00:16:06.859 { 00:16:06.859 "name": "ftl0", 00:16:06.859 "uuid": "ad1e829e-2f51-4fb6-87af-455dad121d66" 00:16:06.859 } 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:16:06.859 11:32:05 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:07.146 11:32:06 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:16:07.146 [ 00:16:07.146 { 00:16:07.146 "name": "ftl0", 00:16:07.146 "aliases": [ 00:16:07.146 "ad1e829e-2f51-4fb6-87af-455dad121d66" 00:16:07.146 ], 00:16:07.146 "product_name": "FTL 
disk", 00:16:07.146 "block_size": 4096, 00:16:07.146 "num_blocks": 20971520, 00:16:07.146 "uuid": "ad1e829e-2f51-4fb6-87af-455dad121d66", 00:16:07.146 "assigned_rate_limits": { 00:16:07.146 "rw_ios_per_sec": 0, 00:16:07.146 "rw_mbytes_per_sec": 0, 00:16:07.146 "r_mbytes_per_sec": 0, 00:16:07.146 "w_mbytes_per_sec": 0 00:16:07.146 }, 00:16:07.146 "claimed": false, 00:16:07.146 "zoned": false, 00:16:07.146 "supported_io_types": { 00:16:07.146 "read": true, 00:16:07.146 "write": true, 00:16:07.146 "unmap": true, 00:16:07.146 "flush": true, 00:16:07.146 "reset": false, 00:16:07.146 "nvme_admin": false, 00:16:07.146 "nvme_io": false, 00:16:07.146 "nvme_io_md": false, 00:16:07.146 "write_zeroes": true, 00:16:07.146 "zcopy": false, 00:16:07.146 "get_zone_info": false, 00:16:07.146 "zone_management": false, 00:16:07.146 "zone_append": false, 00:16:07.146 "compare": false, 00:16:07.146 "compare_and_write": false, 00:16:07.146 "abort": false, 00:16:07.146 "seek_hole": false, 00:16:07.146 "seek_data": false, 00:16:07.146 "copy": false, 00:16:07.146 "nvme_iov_md": false 00:16:07.146 }, 00:16:07.146 "driver_specific": { 00:16:07.146 "ftl": { 00:16:07.146 "base_bdev": "b283d828-5007-4b2b-abbb-ca9c6bf724aa", 00:16:07.146 "cache": "nvc0n1p0" 00:16:07.146 } 00:16:07.146 } 00:16:07.146 } 00:16:07.146 ] 00:16:07.146 11:32:06 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:16:07.146 11:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:16:07.146 11:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:07.433 11:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:07.433 11:32:06 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:07.695 [2024-11-05 11:32:06.755185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.755231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:07.695 [2024-11-05 11:32:06.755245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:07.695 [2024-11-05 11:32:06.755254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.755285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:07.695 [2024-11-05 11:32:06.757843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.757874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:07.695 [2024-11-05 11:32:06.757886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.541 ms 00:16:07.695 [2024-11-05 11:32:06.757894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.758292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.758311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:07.695 [2024-11-05 11:32:06.758322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:16:07.695 [2024-11-05 11:32:06.758330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.761575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.761595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:07.695 
[2024-11-05 11:32:06.761608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.224 ms 00:16:07.695 [2024-11-05 11:32:06.761617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.767795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.767830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:07.695 [2024-11-05 11:32:06.767842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:16:07.695 [2024-11-05 11:32:06.767850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.790980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.791117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:07.695 [2024-11-05 11:32:06.791138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.062 ms 00:16:07.695 [2024-11-05 11:32:06.791145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.822139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.822245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:07.695 [2024-11-05 11:32:06.822262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.942 ms 00:16:07.695 [2024-11-05 11:32:06.822268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.822409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.822417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:07.695 [2024-11-05 11:32:06.822425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:16:07.695 [2024-11-05 11:32:06.822431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.840021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.840047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:07.695 [2024-11-05 11:32:06.840057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.572 ms 00:16:07.695 [2024-11-05 11:32:06.840062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.857255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.857351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:07.695 [2024-11-05 11:32:06.857367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.158 ms 00:16:07.695 [2024-11-05 11:32:06.857372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.874438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.874464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:07.695 [2024-11-05 11:32:06.874473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.032 ms 00:16:07.695 [2024-11-05 11:32:06.874479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.891416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.695 [2024-11-05 11:32:06.891449] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:07.695 [2024-11-05 11:32:06.891459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.863 ms 00:16:07.695 [2024-11-05 11:32:06.891464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.695 [2024-11-05 11:32:06.891497] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:07.695 [2024-11-05 11:32:06.891508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 
[2024-11-05 11:32:06.891652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:07.695 [2024-11-05 11:32:06.891753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:07.696 [2024-11-05 11:32:06.891835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.891993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:07.696 [2024-11-05 11:32:06.892228] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:07.696 [2024-11-05 11:32:06.892236] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: ad1e829e-2f51-4fb6-87af-455dad121d66 00:16:07.696 [2024-11-05 11:32:06.892242] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:07.696 [2024-11-05 11:32:06.892250] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:07.696 [2024-11-05 11:32:06.892255] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:07.696 [2024-11-05 11:32:06.892262] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:07.696 [2024-11-05 11:32:06.892267] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:07.696 [2024-11-05 11:32:06.892276] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:07.696 [2024-11-05 11:32:06.892281] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:07.696 [2024-11-05 11:32:06.892287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:07.696 [2024-11-05 11:32:06.892292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:07.696 [2024-11-05 11:32:06.892299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.696 [2024-11-05 11:32:06.892305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:07.696 [2024-11-05 11:32:06.892312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.804 ms 00:16:07.696 [2024-11-05 11:32:06.892318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.696 [2024-11-05 11:32:06.901972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.696 [2024-11-05 11:32:06.901996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:07.696 [2024-11-05 11:32:06.902006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.624 ms 00:16:07.696 [2024-11-05 11:32:06.902014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.696 [2024-11-05 11:32:06.902283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:07.696 [2024-11-05 11:32:06.902289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:07.696 [2024-11-05 11:32:06.902297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.246 ms 00:16:07.696 [2024-11-05 11:32:06.902302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.696 [2024-11-05 11:32:06.936604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.696 [2024-11-05 11:32:06.936634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:07.696 [2024-11-05 11:32:06.936645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.696 [2024-11-05 11:32:06.936651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:16:07.696 [2024-11-05 11:32:06.936702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.696 [2024-11-05 11:32:06.936709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:07.696 [2024-11-05 11:32:06.936716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.696 [2024-11-05 11:32:06.936722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.696 [2024-11-05 11:32:06.936798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.696 [2024-11-05 11:32:06.936825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:07.697 [2024-11-05 11:32:06.936833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.697 [2024-11-05 11:32:06.936841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.697 [2024-11-05 11:32:06.936864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.697 [2024-11-05 11:32:06.936870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:07.697 [2024-11-05 11:32:06.936877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.697 [2024-11-05 11:32:06.936883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.957 [2024-11-05 11:32:06.999425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.957 [2024-11-05 11:32:06.999579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:07.957 [2024-11-05 11:32:06.999596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.957 [2024-11-05 11:32:06.999605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.957 [2024-11-05 11:32:07.047877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.957 [2024-11-05 11:32:07.047913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:07.957 [2024-11-05 11:32:07.047924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.957 [2024-11-05 11:32:07.047930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.957 [2024-11-05 11:32:07.047994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.957 [2024-11-05 11:32:07.048001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:07.957 [2024-11-05 11:32:07.048009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.957 [2024-11-05 11:32:07.048015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.957 [2024-11-05 11:32:07.048076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.957 [2024-11-05 11:32:07.048083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:07.957 [2024-11-05 11:32:07.048091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.958 [2024-11-05 11:32:07.048097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.958 [2024-11-05 11:32:07.048178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.958 [2024-11-05 11:32:07.048185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:07.958 [2024-11-05 11:32:07.048193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.958 [2024-11-05 
11:32:07.048199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.958 [2024-11-05 11:32:07.048236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.958 [2024-11-05 11:32:07.048245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:07.958 [2024-11-05 11:32:07.048255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.958 [2024-11-05 11:32:07.048261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.958 [2024-11-05 11:32:07.048293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.958 [2024-11-05 11:32:07.048300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:07.958 [2024-11-05 11:32:07.048307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.958 [2024-11-05 11:32:07.048313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.958 [2024-11-05 11:32:07.048355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:07.958 [2024-11-05 11:32:07.048362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:07.958 [2024-11-05 11:32:07.048370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:07.958 [2024-11-05 11:32:07.048375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:07.958 [2024-11-05 11:32:07.048499] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 293.302 ms, result 0 00:16:07.958 true 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72286 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 72286 ']' 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 72286 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72286 00:16:07.958 killing process with pid 72286 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72286' 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 72286 00:16:07.958 11:32:07 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 72286 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:13.237 11:32:11 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:13.238 11:32:11 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:13.238 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:13.238 fio-3.35 00:16:13.238 Starting 1 thread 00:16:17.479 00:16:17.479 test: (groupid=0, jobs=1): err= 0: pid=72470: Tue Nov 5 11:32:16 2024 00:16:17.479 read: IOPS=1228, BW=81.6MiB/s (85.5MB/s)(255MiB/3120msec) 00:16:17.479 slat (nsec): min=2975, max=16425, avg=3786.73, stdev=1525.28 00:16:17.479 clat (usec): min=249, max=1052, avg=371.41, stdev=121.38 00:16:17.479 lat (usec): min=252, max=1056, avg=375.20, stdev=121.61 00:16:17.479 clat percentiles (usec): 00:16:17.479 | 1.00th=[ 297], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 322], 00:16:17.479 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 322], 60.00th=[ 326], 00:16:17.479 | 70.00th=[ 334], 80.00th=[ 388], 90.00th=[ 469], 95.00th=[ 660], 00:16:17.479 | 99.00th=[ 922], 99.50th=[ 930], 99.90th=[ 963], 99.95th=[ 996], 00:16:17.479 | 99.99th=[ 1057] 00:16:17.479 write: IOPS=1237, BW=82.2MiB/s (86.2MB/s)(256MiB/3115msec); 0 zone resets 00:16:17.479 slat (nsec): min=13752, max=90777, avg=16995.43, stdev=2752.90 00:16:17.479 clat (usec): min=296, max=2412, avg=405.87, stdev=140.68 00:16:17.479 lat (usec): min=316, max=2433, avg=422.86, stdev=140.96 00:16:17.479 clat percentiles (usec): 00:16:17.479 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 347], 00:16:17.479 | 30.00th=[ 347], 40.00th=[ 347], 50.00th=[ 351], 60.00th=[ 355], 00:16:17.479 | 70.00th=[ 363], 80.00th=[ 416], 90.00th=[ 553], 95.00th=[ 742], 00:16:17.479 | 99.00th=[ 955], 99.50th=[ 1012], 99.90th=[ 1090], 99.95th=[ 1516], 00:16:17.479 | 99.99th=[ 2409] 00:16:17.479 bw ( KiB/s): min=54808, max=94248, per=100.00%, avg=84229.33, stdev=14636.20, samples=6 00:16:17.479 iops : min= 806, max= 1386, avg=1238.67, stdev=215.24, samples=6 00:16:17.479 lat (usec) : 250=0.01%, 500=89.48%, 750=6.70%, 
1000=3.51% 00:16:17.479 lat (msec) : 2=0.29%, 4=0.01% 00:16:17.479 cpu : usr=99.33%, sys=0.06%, ctx=6, majf=0, minf=1169 00:16:17.479 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:17.479 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.479 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:17.479 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:17.479 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:17.479 00:16:17.479 Run status group 0 (all jobs): 00:16:17.479 READ: bw=81.6MiB/s (85.5MB/s), 81.6MiB/s-81.6MiB/s (85.5MB/s-85.5MB/s), io=255MiB (267MB), run=3120-3120msec 00:16:17.479 WRITE: bw=82.2MiB/s (86.2MB/s), 82.2MiB/s-82.2MiB/s (86.2MB/s-86.2MB/s), io=256MiB (269MB), run=3115-3115msec 00:16:18.866 ----------------------------------------------------- 00:16:18.866 Suppressions used: 00:16:18.866 count bytes template 00:16:18.866 1 5 /usr/src/fio/parse.c 00:16:18.866 1 8 libtcmalloc_minimal.so 00:16:18.866 1 904 libcrypto.so 00:16:18.866 ----------------------------------------------------- 00:16:18.866 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:18.866 11:32:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:18.866 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:18.866 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:18.866 fio-3.35 00:16:18.866 Starting 2 threads 00:16:45.449 00:16:45.449 first_half: (groupid=0, jobs=1): err= 0: pid=72562: Tue Nov 5 11:32:41 2024 00:16:45.449 read: IOPS=2874, BW=11.2MiB/s (11.8MB/s)(255MiB/22696msec) 00:16:45.449 slat (nsec): min=3012, max=19913, avg=3677.90, stdev=658.83 00:16:45.449 clat (usec): min=573, max=273751, avg=32597.96, stdev=15613.16 00:16:45.449 lat (usec): min=577, max=273754, avg=32601.64, stdev=15613.17 00:16:45.449 clat percentiles (msec): 00:16:45.449 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:16:45.449 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:16:45.449 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 40], 00:16:45.449 | 99.00th=[ 117], 99.50th=[ 155], 99.90th=[ 192], 99.95th=[ 236], 00:16:45.449 | 99.99th=[ 268] 00:16:45.449 write: IOPS=3681, BW=14.4MiB/s (15.1MB/s)(256MiB/17802msec); 0 zone resets 00:16:45.449 slat (usec): min=3, max=322, avg= 5.33, stdev= 2.65 00:16:45.449 clat (usec): min=356, max=80585, avg=11861.76, stdev=19770.15 00:16:45.449 lat (usec): min=363, max=80590, avg=11867.09, stdev=19770.18 00:16:45.449 clat percentiles (usec): 00:16:45.449 | 1.00th=[ 644], 5.00th=[ 758], 10.00th=[ 898], 20.00th=[ 1074], 00:16:45.449 | 30.00th=[ 1352], 40.00th=[ 2802], 50.00th=[ 4113], 60.00th=[ 5342], 00:16:45.449 | 70.00th=[ 6915], 80.00th=[10552], 90.00th=[58459], 95.00th=[63177], 00:16:45.449 | 99.00th=[68682], 99.50th=[71828], 99.90th=[78119], 99.95th=[79168], 00:16:45.449 | 99.99th=[80217] 00:16:45.449 bw ( KiB/s): min= 736, max=41472, per=71.21%, avg=20971.52, stdev=12725.83, samples=25 00:16:45.449 iops : min= 184, max=10368, avg=5242.88, stdev=3181.46, samples=25 00:16:45.449 lat (usec) : 500=0.01%, 750=2.31%, 1000=5.65% 00:16:45.449 lat (msec) : 2=9.86%, 4=7.37%, 10=14.83%, 20=3.66%, 50=48.66% 00:16:45.449 lat (msec) : 100=6.91%, 250=0.72%, 500=0.02% 00:16:45.449 cpu : usr=99.27%, sys=0.12%, ctx=30, majf=0, minf=5511 00:16:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.449 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.449 issued rwts: total=65239,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.449 second_half: (groupid=0, jobs=1): err= 0: pid=72563: Tue Nov 5 11:32:41 2024 00:16:45.449 read: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(254MiB/22783msec) 00:16:45.449 slat (nsec): min=2982, max=18079, avg=3673.83, stdev=612.84 00:16:45.449 clat (usec): min=622, max=210771, avg=32429.15, stdev=16127.86 00:16:45.449 lat (usec): min=626, max=210776, avg=32432.82, stdev=16127.89 00:16:45.449 clat percentiles (msec): 00:16:45.449 | 1.00th=[ 4], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 30], 00:16:45.449 | 30.00th=[ 30], 40.00th=[ 30], 50.00th=[ 31], 60.00th=[ 31], 00:16:45.449 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 36], 
95.00th=[ 39], 00:16:45.449 | 99.00th=[ 129], 99.50th=[ 150], 99.90th=[ 192], 99.95th=[ 199], 00:16:45.449 | 99.99th=[ 203] 00:16:45.449 write: IOPS=3868, BW=15.1MiB/s (15.8MB/s)(256MiB/16943msec); 0 zone resets 00:16:45.449 slat (usec): min=3, max=446, avg= 5.43, stdev= 2.93 00:16:45.449 clat (usec): min=355, max=80967, avg=12244.37, stdev=20179.22 00:16:45.449 lat (usec): min=359, max=80973, avg=12249.80, stdev=20179.26 00:16:45.449 clat percentiles (usec): 00:16:45.449 | 1.00th=[ 635], 5.00th=[ 758], 10.00th=[ 881], 20.00th=[ 1037], 00:16:45.449 | 30.00th=[ 1172], 40.00th=[ 1434], 50.00th=[ 2999], 60.00th=[ 4948], 00:16:45.449 | 70.00th=[ 8586], 80.00th=[11731], 90.00th=[58459], 95.00th=[63701], 00:16:45.449 | 99.00th=[69731], 99.50th=[72877], 99.90th=[79168], 99.95th=[79168], 00:16:45.449 | 99.99th=[80217] 00:16:45.449 bw ( KiB/s): min= 1080, max=59096, per=71.21%, avg=20971.52, stdev=14777.06, samples=25 00:16:45.449 iops : min= 270, max=14774, avg=5242.88, stdev=3694.26, samples=25 00:16:45.449 lat (usec) : 500=0.01%, 750=2.35%, 1000=6.50% 00:16:45.449 lat (msec) : 2=14.20%, 4=5.13%, 10=10.91%, 20=4.11%, 50=48.95% 00:16:45.449 lat (msec) : 100=6.99%, 250=0.86% 00:16:45.449 cpu : usr=99.48%, sys=0.09%, ctx=31, majf=0, minf=5590 00:16:45.449 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:45.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:45.449 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:45.449 issued rwts: total=65149,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:45.449 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:45.449 00:16:45.449 Run status group 0 (all jobs): 00:16:45.449 READ: bw=22.4MiB/s (23.4MB/s), 11.2MiB/s-11.2MiB/s (11.7MB/s-11.8MB/s), io=509MiB (534MB), run=22696-22783msec 00:16:45.449 WRITE: bw=28.8MiB/s (30.2MB/s), 14.4MiB/s-15.1MiB/s (15.1MB/s-15.8MB/s), io=512MiB (537MB), run=16943-17802msec 00:16:45.449 ----------------------------------------------------- 00:16:45.449 Suppressions used: 00:16:45.449 count bytes template 00:16:45.449 2 10 /usr/src/fio/parse.c 00:16:45.449 1 96 /usr/src/fio/iolog.c 00:16:45.449 1 8 libtcmalloc_minimal.so 00:16:45.449 1 904 libcrypto.so 00:16:45.449 ----------------------------------------------------- 00:16:45.449 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:45.449 11:32:44 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:45.450 
11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:45.450 11:32:44 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:45.450 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:45.450 fio-3.35 00:16:45.450 Starting 1 thread 00:17:00.353 00:17:00.353 test: (groupid=0, jobs=1): err= 0: pid=72868: Tue Nov 5 11:32:58 2024 00:17:00.353 read: IOPS=7380, BW=28.8MiB/s (30.2MB/s)(255MiB/8834msec) 00:17:00.353 slat (nsec): min=3025, max=67123, avg=4364.59, stdev=1487.10 00:17:00.353 clat (usec): min=466, max=32994, avg=17334.68, stdev=3225.05 00:17:00.353 lat (usec): min=473, max=32999, avg=17339.05, stdev=3225.45 00:17:00.353 clat percentiles (usec): 00:17:00.353 | 1.00th=[13566], 5.00th=[14091], 10.00th=[14615], 20.00th=[14877], 00:17:00.353 | 30.00th=[15008], 40.00th=[15401], 50.00th=[16319], 60.00th=[16909], 00:17:00.354 | 70.00th=[18220], 80.00th=[20055], 90.00th=[22414], 95.00th=[23987], 00:17:00.354 | 99.00th=[26870], 99.50th=[27919], 99.90th=[29492], 99.95th=[30278], 00:17:00.354 | 99.99th=[32113] 00:17:00.354 write: IOPS=17.1k, BW=66.7MiB/s (70.0MB/s)(256MiB/3836msec); 0 zone resets 00:17:00.354 slat (usec): min=4, max=491, avg= 5.56, stdev= 3.43 00:17:00.354 clat (usec): min=412, max=45201, avg=7451.76, stdev=9383.04 00:17:00.354 lat (usec): min=417, max=45206, avg=7457.32, stdev=9383.02 00:17:00.354 clat percentiles (usec): 00:17:00.354 | 1.00th=[ 586], 5.00th=[ 709], 10.00th=[ 799], 20.00th=[ 930], 00:17:00.354 | 30.00th=[ 1057], 40.00th=[ 1401], 50.00th=[ 4490], 60.00th=[ 5276], 00:17:00.354 | 70.00th=[ 6521], 80.00th=[ 9372], 90.00th=[27395], 95.00th=[29230], 00:17:00.354 | 99.00th=[31851], 99.50th=[35914], 99.90th=[38536], 99.95th=[39060], 00:17:00.354 | 99.99th=[44303] 00:17:00.354 bw ( KiB/s): min=38736, max=98640, per=95.90%, avg=65536.00, stdev=18813.48, samples=8 00:17:00.354 iops : min= 9684, max=24660, avg=16384.00, stdev=4703.37, samples=8 00:17:00.354 lat (usec) : 500=0.05%, 750=3.44%, 1000=9.71% 00:17:00.354 lat (msec) : 2=7.50%, 4=2.08%, 10=17.90%, 20=41.42%, 50=17.90% 00:17:00.354 cpu : usr=98.37%, sys=0.41%, ctx=33, majf=0, minf=5566 00:17:00.354 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:17:00.354 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:00.354 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:00.354 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:00.354 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:00.354 00:17:00.354 Run status group 0 (all jobs): 00:17:00.354 READ: bw=28.8MiB/s (30.2MB/s), 28.8MiB/s-28.8MiB/s (30.2MB/s-30.2MB/s), io=255MiB (267MB), run=8834-8834msec 00:17:00.354 WRITE: bw=66.7MiB/s (70.0MB/s), 66.7MiB/s-66.7MiB/s (70.0MB/s-70.0MB/s), io=256MiB (268MB), run=3836-3836msec 00:17:00.615 ----------------------------------------------------- 00:17:00.615 Suppressions used: 00:17:00.615 count bytes template 00:17:00.615 1 5 /usr/src/fio/parse.c 00:17:00.615 2 192 /usr/src/fio/iolog.c 00:17:00.615 1 8 libtcmalloc_minimal.so 00:17:00.615 1 904 libcrypto.so 00:17:00.615 ----------------------------------------------------- 00:17:00.615 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:00.615 Remove shared memory files 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57082 /dev/shm/spdk_tgt_trace.pid71199 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:17:00.615 ************************************ 00:17:00.615 END TEST ftl_fio_basic 00:17:00.615 ************************************ 00:17:00.615 00:17:00.615 real 1m0.920s 00:17:00.615 user 2m11.031s 00:17:00.615 sys 0m2.800s 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:00.615 11:32:59 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 11:32:59 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:00.878 11:32:59 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:00.878 11:32:59 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:00.878 11:32:59 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:00.878 ************************************ 00:17:00.878 START TEST ftl_bdevperf 00:17:00.878 ************************************ 00:17:00.878 11:32:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:17:00.878 * Looking for test storage... 
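Before the bdevperf stage continues below, a hedged recap of how the three fio workloads in the ftl_fio_basic section above were launched: the fio_bdev/fio_plugin helpers (from autotest_common.sh) resolve the ASan runtime that the SPDK fio plugin links against and preload it together with the plugin, then run stock fio against the job file. A minimal sketch, assuming the paths printed in this trace (the real helper also probes clang's libclang_rt.asan and takes the fio directory from a variable):

spdk_root=/home/vagrant/spdk_repo/spdk
plugin="$spdk_root/build/fio/spdk_bdev"                      # SPDK bdev ioengine plugin
job="$spdk_root/test/ftl/config/fio/randw-verify.fio"        # same pattern for the -j2 and depth128 jobs

# When the plugin is built with ASan, fio has to load the ASan runtime before the
# plugin itself, so both go into LD_PRELOAD with the runtime first.
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')

LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job"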
00:17:00.878 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:00.878 11:32:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:00.878 11:32:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:00.878 11:32:59 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 
--rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:00.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.878 --rc genhtml_branch_coverage=1 00:17:00.878 --rc genhtml_function_coverage=1 00:17:00.878 --rc genhtml_legend=1 00:17:00.878 --rc geninfo_all_blocks=1 00:17:00.878 --rc geninfo_unexecuted_blocks=1 00:17:00.878 00:17:00.878 ' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:17:00.878 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73105 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73105 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 73105 ']' 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:00.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:00.879 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:00.879 [2024-11-05 11:33:00.133055] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
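For orientation while reading the RPC trace that follows, a condensed, hedged sketch of the bdev stack that bdevperf.sh assembles before handing ftl0 to the bdevperf app started above with -z -T ftl0. The commands are the ones visible verbatim below; the helper functions and size bookkeeping from test/ftl/common.sh are collapsed, and the UUIDs differ on every run:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base device: the 0000:00:11.0 namespace, wrapped in an lvstore plus a thin-provisioned lvol.
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
lvs_uuid=$($rpc bdev_lvol_create_lvstore nvme0n1 lvs)             # e.g. 42306015-8e58-...
lvol=$($rpc bdev_lvol_create nvme0n1p0 103424 -t -u "$lvs_uuid")  # 103424 MiB, thin

# Write-buffer / NV cache: a 5171 MiB split of the 0000:00:10.0 controller.
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
$rpc bdev_split_create nvc0n1 -s 5171 1                           # -> nvc0n1p0

# The FTL bdev the benchmark targets.
$rpc -t 240 bdev_ftl_create -b ftl0 -d "$lvol" -c nvc0n1p0 --l2p_dram_limit 20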
00:17:00.879 [2024-11-05 11:33:00.133254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73105 ] 00:17:01.140 [2024-11-05 11:33:00.289614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.140 [2024-11-05 11:33:00.405993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:17:02.085 11:33:00 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:02.085 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:02.346 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:02.346 { 00:17:02.346 "name": "nvme0n1", 00:17:02.346 "aliases": [ 00:17:02.346 "9c5b7cf8-e1f3-401f-b86e-0176a65f7c53" 00:17:02.346 ], 00:17:02.346 "product_name": "NVMe disk", 00:17:02.346 "block_size": 4096, 00:17:02.346 "num_blocks": 1310720, 00:17:02.346 "uuid": "9c5b7cf8-e1f3-401f-b86e-0176a65f7c53", 00:17:02.346 "numa_id": -1, 00:17:02.346 "assigned_rate_limits": { 00:17:02.346 "rw_ios_per_sec": 0, 00:17:02.346 "rw_mbytes_per_sec": 0, 00:17:02.346 "r_mbytes_per_sec": 0, 00:17:02.346 "w_mbytes_per_sec": 0 00:17:02.346 }, 00:17:02.346 "claimed": true, 00:17:02.346 "claim_type": "read_many_write_one", 00:17:02.346 "zoned": false, 00:17:02.346 "supported_io_types": { 00:17:02.346 "read": true, 00:17:02.346 "write": true, 00:17:02.346 "unmap": true, 00:17:02.346 "flush": true, 00:17:02.347 "reset": true, 00:17:02.347 "nvme_admin": true, 00:17:02.347 "nvme_io": true, 00:17:02.347 "nvme_io_md": false, 00:17:02.347 "write_zeroes": true, 00:17:02.347 "zcopy": false, 00:17:02.347 "get_zone_info": false, 00:17:02.347 "zone_management": false, 00:17:02.347 "zone_append": false, 00:17:02.347 "compare": true, 00:17:02.347 "compare_and_write": false, 00:17:02.347 "abort": true, 00:17:02.347 "seek_hole": false, 00:17:02.347 "seek_data": false, 00:17:02.347 "copy": true, 00:17:02.347 "nvme_iov_md": false 00:17:02.347 }, 00:17:02.347 "driver_specific": { 00:17:02.347 
"nvme": [ 00:17:02.347 { 00:17:02.347 "pci_address": "0000:00:11.0", 00:17:02.347 "trid": { 00:17:02.347 "trtype": "PCIe", 00:17:02.347 "traddr": "0000:00:11.0" 00:17:02.347 }, 00:17:02.347 "ctrlr_data": { 00:17:02.347 "cntlid": 0, 00:17:02.347 "vendor_id": "0x1b36", 00:17:02.347 "model_number": "QEMU NVMe Ctrl", 00:17:02.347 "serial_number": "12341", 00:17:02.347 "firmware_revision": "8.0.0", 00:17:02.347 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:02.347 "oacs": { 00:17:02.347 "security": 0, 00:17:02.347 "format": 1, 00:17:02.347 "firmware": 0, 00:17:02.347 "ns_manage": 1 00:17:02.347 }, 00:17:02.347 "multi_ctrlr": false, 00:17:02.347 "ana_reporting": false 00:17:02.347 }, 00:17:02.347 "vs": { 00:17:02.347 "nvme_version": "1.4" 00:17:02.347 }, 00:17:02.347 "ns_data": { 00:17:02.347 "id": 1, 00:17:02.347 "can_share": false 00:17:02.347 } 00:17:02.347 } 00:17:02.347 ], 00:17:02.347 "mp_policy": "active_passive" 00:17:02.347 } 00:17:02.347 } 00:17:02.347 ]' 00:17:02.347 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:02.347 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:02.347 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=5e40073e-0166-469f-838c-bb93d057c983 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:17:02.608 11:33:01 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5e40073e-0166-469f-838c-bb93d057c983 00:17:02.869 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:03.131 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=42306015-8e58-4cac-8e17-327ad21a969c 00:17:03.131 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 42306015-8e58-4cac-8e17-327ad21a969c 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:17:03.393 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.393 11:33:02 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.394 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:03.394 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:03.394 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:03.394 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:03.656 { 00:17:03.656 "name": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:03.656 "aliases": [ 00:17:03.656 "lvs/nvme0n1p0" 00:17:03.656 ], 00:17:03.656 "product_name": "Logical Volume", 00:17:03.656 "block_size": 4096, 00:17:03.656 "num_blocks": 26476544, 00:17:03.656 "uuid": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:03.656 "assigned_rate_limits": { 00:17:03.656 "rw_ios_per_sec": 0, 00:17:03.656 "rw_mbytes_per_sec": 0, 00:17:03.656 "r_mbytes_per_sec": 0, 00:17:03.656 "w_mbytes_per_sec": 0 00:17:03.656 }, 00:17:03.656 "claimed": false, 00:17:03.656 "zoned": false, 00:17:03.656 "supported_io_types": { 00:17:03.656 "read": true, 00:17:03.656 "write": true, 00:17:03.656 "unmap": true, 00:17:03.656 "flush": false, 00:17:03.656 "reset": true, 00:17:03.656 "nvme_admin": false, 00:17:03.656 "nvme_io": false, 00:17:03.656 "nvme_io_md": false, 00:17:03.656 "write_zeroes": true, 00:17:03.656 "zcopy": false, 00:17:03.656 "get_zone_info": false, 00:17:03.656 "zone_management": false, 00:17:03.656 "zone_append": false, 00:17:03.656 "compare": false, 00:17:03.656 "compare_and_write": false, 00:17:03.656 "abort": false, 00:17:03.656 "seek_hole": true, 00:17:03.656 "seek_data": true, 00:17:03.656 "copy": false, 00:17:03.656 "nvme_iov_md": false 00:17:03.656 }, 00:17:03.656 "driver_specific": { 00:17:03.656 "lvol": { 00:17:03.656 "lvol_store_uuid": "42306015-8e58-4cac-8e17-327ad21a969c", 00:17:03.656 "base_bdev": "nvme0n1", 00:17:03.656 "thin_provision": true, 00:17:03.656 "num_allocated_clusters": 0, 00:17:03.656 "snapshot": false, 00:17:03.656 "clone": false, 00:17:03.656 "esnap_clone": false 00:17:03.656 } 00:17:03.656 } 00:17:03.656 } 00:17:03.656 ]' 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:17:03.656 11:33:02 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=3a34d499-baca-4a25-950e-054a65d08a0c 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:03.918 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:04.179 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:04.179 { 00:17:04.179 "name": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:04.179 "aliases": [ 00:17:04.179 "lvs/nvme0n1p0" 00:17:04.179 ], 00:17:04.179 "product_name": "Logical Volume", 00:17:04.179 "block_size": 4096, 00:17:04.179 "num_blocks": 26476544, 00:17:04.179 "uuid": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:04.179 "assigned_rate_limits": { 00:17:04.179 "rw_ios_per_sec": 0, 00:17:04.179 "rw_mbytes_per_sec": 0, 00:17:04.179 "r_mbytes_per_sec": 0, 00:17:04.179 "w_mbytes_per_sec": 0 00:17:04.179 }, 00:17:04.179 "claimed": false, 00:17:04.179 "zoned": false, 00:17:04.179 "supported_io_types": { 00:17:04.179 "read": true, 00:17:04.179 "write": true, 00:17:04.179 "unmap": true, 00:17:04.179 "flush": false, 00:17:04.179 "reset": true, 00:17:04.179 "nvme_admin": false, 00:17:04.179 "nvme_io": false, 00:17:04.179 "nvme_io_md": false, 00:17:04.179 "write_zeroes": true, 00:17:04.179 "zcopy": false, 00:17:04.179 "get_zone_info": false, 00:17:04.179 "zone_management": false, 00:17:04.179 "zone_append": false, 00:17:04.179 "compare": false, 00:17:04.179 "compare_and_write": false, 00:17:04.179 "abort": false, 00:17:04.179 "seek_hole": true, 00:17:04.179 "seek_data": true, 00:17:04.179 "copy": false, 00:17:04.179 "nvme_iov_md": false 00:17:04.179 }, 00:17:04.179 "driver_specific": { 00:17:04.179 "lvol": { 00:17:04.179 "lvol_store_uuid": "42306015-8e58-4cac-8e17-327ad21a969c", 00:17:04.179 "base_bdev": "nvme0n1", 00:17:04.179 "thin_provision": true, 00:17:04.180 "num_allocated_clusters": 0, 00:17:04.180 "snapshot": false, 00:17:04.180 "clone": false, 00:17:04.180 "esnap_clone": false 00:17:04.180 } 00:17:04.180 } 00:17:04.180 } 00:17:04.180 ]' 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:17:04.180 11:33:03 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=3a34d499-baca-4a25-950e-054a65d08a0c 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:17:04.441 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3a34d499-baca-4a25-950e-054a65d08a0c 00:17:04.702 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:04.702 { 00:17:04.702 "name": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:04.702 "aliases": [ 00:17:04.702 "lvs/nvme0n1p0" 00:17:04.702 ], 00:17:04.702 "product_name": "Logical Volume", 00:17:04.702 "block_size": 4096, 00:17:04.702 "num_blocks": 26476544, 00:17:04.702 "uuid": "3a34d499-baca-4a25-950e-054a65d08a0c", 00:17:04.702 "assigned_rate_limits": { 00:17:04.702 "rw_ios_per_sec": 0, 00:17:04.702 "rw_mbytes_per_sec": 0, 00:17:04.702 "r_mbytes_per_sec": 0, 00:17:04.702 "w_mbytes_per_sec": 0 00:17:04.702 }, 00:17:04.702 "claimed": false, 00:17:04.702 "zoned": false, 00:17:04.702 "supported_io_types": { 00:17:04.702 "read": true, 00:17:04.702 "write": true, 00:17:04.702 "unmap": true, 00:17:04.702 "flush": false, 00:17:04.702 "reset": true, 00:17:04.702 "nvme_admin": false, 00:17:04.702 "nvme_io": false, 00:17:04.702 "nvme_io_md": false, 00:17:04.702 "write_zeroes": true, 00:17:04.702 "zcopy": false, 00:17:04.702 "get_zone_info": false, 00:17:04.702 "zone_management": false, 00:17:04.702 "zone_append": false, 00:17:04.702 "compare": false, 00:17:04.702 "compare_and_write": false, 00:17:04.702 "abort": false, 00:17:04.702 "seek_hole": true, 00:17:04.702 "seek_data": true, 00:17:04.702 "copy": false, 00:17:04.702 "nvme_iov_md": false 00:17:04.702 }, 00:17:04.702 "driver_specific": { 00:17:04.702 "lvol": { 00:17:04.702 "lvol_store_uuid": "42306015-8e58-4cac-8e17-327ad21a969c", 00:17:04.702 "base_bdev": "nvme0n1", 00:17:04.702 "thin_provision": true, 00:17:04.702 "num_allocated_clusters": 0, 00:17:04.702 "snapshot": false, 00:17:04.702 "clone": false, 00:17:04.703 "esnap_clone": false 00:17:04.703 } 00:17:04.703 } 00:17:04.703 } 00:17:04.703 ]' 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:17:04.703 11:33:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3a34d499-baca-4a25-950e-054a65d08a0c -c nvc0n1p0 --l2p_dram_limit 20 00:17:04.964 [2024-11-05 11:33:04.018827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.964 [2024-11-05 11:33:04.018965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:04.965 [2024-11-05 11:33:04.018983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:04.965 [2024-11-05 11:33:04.018991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.019040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.019050] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:04.965 [2024-11-05 11:33:04.019056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:04.965 [2024-11-05 11:33:04.019065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.019079] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:04.965 [2024-11-05 11:33:04.019660] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:04.965 [2024-11-05 11:33:04.019673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.019683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:04.965 [2024-11-05 11:33:04.019690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:17:04.965 [2024-11-05 11:33:04.019697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.019745] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 159893d6-ef4e-4e3b-9384-701979427b8a 00:17:04.965 [2024-11-05 11:33:04.020690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.020720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:04.965 [2024-11-05 11:33:04.020729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:04.965 [2024-11-05 11:33:04.020737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.025504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.025531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:04.965 [2024-11-05 11:33:04.025540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.734 ms 00:17:04.965 [2024-11-05 11:33:04.025546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.025613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.025620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:04.965 [2024-11-05 11:33:04.025631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:04.965 [2024-11-05 11:33:04.025637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.025678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.025685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:04.965 [2024-11-05 11:33:04.025693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:04.965 [2024-11-05 11:33:04.025699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.025715] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:04.965 [2024-11-05 11:33:04.028620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.028646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:04.965 [2024-11-05 11:33:04.028654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.911 ms 00:17:04.965 [2024-11-05 11:33:04.028662] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.028686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.028696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:04.965 [2024-11-05 11:33:04.028702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:04.965 [2024-11-05 11:33:04.028709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.028719] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:04.965 [2024-11-05 11:33:04.028835] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:04.965 [2024-11-05 11:33:04.028846] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:04.965 [2024-11-05 11:33:04.028857] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:04.965 [2024-11-05 11:33:04.028865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:04.965 [2024-11-05 11:33:04.028873] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:04.965 [2024-11-05 11:33:04.028882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:04.965 [2024-11-05 11:33:04.028889] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:04.965 [2024-11-05 11:33:04.028894] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:04.965 [2024-11-05 11:33:04.028901] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:04.965 [2024-11-05 11:33:04.028907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.028913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:04.965 [2024-11-05 11:33:04.028919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:17:04.965 [2024-11-05 11:33:04.028928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.028990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.965 [2024-11-05 11:33:04.028998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:04.965 [2024-11-05 11:33:04.029004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:04.965 [2024-11-05 11:33:04.029012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.965 [2024-11-05 11:33:04.029079] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:04.965 [2024-11-05 11:33:04.029087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:04.965 [2024-11-05 11:33:04.029093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:04.965 [2024-11-05 11:33:04.029114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:04.965 
[2024-11-05 11:33:04.029126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:04.965 [2024-11-05 11:33:04.029131] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.965 [2024-11-05 11:33:04.029143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:04.965 [2024-11-05 11:33:04.029150] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:04.965 [2024-11-05 11:33:04.029154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:04.965 [2024-11-05 11:33:04.029165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:04.965 [2024-11-05 11:33:04.029172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:04.965 [2024-11-05 11:33:04.029179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029184] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:04.965 [2024-11-05 11:33:04.029192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029197] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:04.965 [2024-11-05 11:33:04.029208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:04.965 [2024-11-05 11:33:04.029226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029236] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:04.965 [2024-11-05 11:33:04.029241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:04.965 [2024-11-05 11:33:04.029258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:04.965 [2024-11-05 11:33:04.029276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.965 [2024-11-05 11:33:04.029287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:04.965 [2024-11-05 11:33:04.029293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:04.965 [2024-11-05 11:33:04.029297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:04.965 [2024-11-05 11:33:04.029303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:04.965 [2024-11-05 11:33:04.029308] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:17:04.965 [2024-11-05 11:33:04.029315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:04.965 [2024-11-05 11:33:04.029326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:04.965 [2024-11-05 11:33:04.029330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029336] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:04.965 [2024-11-05 11:33:04.029342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:04.965 [2024-11-05 11:33:04.029349] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:04.965 [2024-11-05 11:33:04.029355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:04.965 [2024-11-05 11:33:04.029365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:04.965 [2024-11-05 11:33:04.029370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:04.965 [2024-11-05 11:33:04.029376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:04.966 [2024-11-05 11:33:04.029381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:04.966 [2024-11-05 11:33:04.029387] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:04.966 [2024-11-05 11:33:04.029392] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:04.966 [2024-11-05 11:33:04.029401] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:04.966 [2024-11-05 11:33:04.029409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:04.966 [2024-11-05 11:33:04.029421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:04.966 [2024-11-05 11:33:04.029428] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:04.966 [2024-11-05 11:33:04.029433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:04.966 [2024-11-05 11:33:04.029440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:04.966 [2024-11-05 11:33:04.029445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:04.966 [2024-11-05 11:33:04.029451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:04.966 [2024-11-05 11:33:04.029456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:04.966 [2024-11-05 11:33:04.029464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:04.966 [2024-11-05 11:33:04.029469] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:04.966 [2024-11-05 11:33:04.029501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:04.966 [2024-11-05 11:33:04.029507] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029514] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:04.966 [2024-11-05 11:33:04.029519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:04.966 [2024-11-05 11:33:04.029526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:04.966 [2024-11-05 11:33:04.029532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:04.966 [2024-11-05 11:33:04.029539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:04.966 [2024-11-05 11:33:04.029545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:04.966 [2024-11-05 11:33:04.029553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:17:04.966 [2024-11-05 11:33:04.029559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:04.966 [2024-11-05 11:33:04.029595] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
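For reference, the records above trace the FTL instance being assembled for this test: the logical volume reported by bdev_get_bdevs (block_size 4096 x num_blocks 26476544 = 103424 MiB) becomes the base device, a 5171 MiB split of nvc0n1 becomes the non-volatile write cache, and bdev_ftl_create ties them together with a 20 MiB L2P DRAM limit. A minimal stand-alone sketch of that RPC sequence, assuming a running SPDK target and the bdev names and UUID shown in this log:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # carve one 5171 MiB partition off the cache namespace for the FTL write buffer
  "$RPC" bdev_split_create nvc0n1 -s 5171 1
  # create the FTL bdev: -d is the base (data) bdev, -c the NV cache bdev; startup
  # scrubs the cache region first, which is why the log warns it "may take a while"
  "$RPC" -t 240 bdev_ftl_create -b ftl0 -d 3a34d499-baca-4a25-950e-054a65d08a0c \
      -c nvc0n1p0 --l2p_dram_limit 20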
00:17:04.966 [2024-11-05 11:33:04.029603] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:09.173 [2024-11-05 11:33:07.908498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.173 [2024-11-05 11:33:07.908743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:09.173 [2024-11-05 11:33:07.908863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3878.883 ms 00:17:09.173 [2024-11-05 11:33:07.908897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.173 [2024-11-05 11:33:07.940361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.173 [2024-11-05 11:33:07.940564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:09.173 [2024-11-05 11:33:07.940687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.173 ms 00:17:09.173 [2024-11-05 11:33:07.940714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.173 [2024-11-05 11:33:07.940880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.173 [2024-11-05 11:33:07.941026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:09.173 [2024-11-05 11:33:07.941058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:17:09.173 [2024-11-05 11:33:07.941079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.173 [2024-11-05 11:33:07.989734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.173 [2024-11-05 11:33:07.989966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:09.173 [2024-11-05 11:33:07.989997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.593 ms 00:17:09.173 [2024-11-05 11:33:07.990008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.173 [2024-11-05 11:33:07.990052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.173 [2024-11-05 11:33:07.990062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:09.173 [2024-11-05 11:33:07.990073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:09.173 [2024-11-05 11:33:07.990085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.173 [2024-11-05 11:33:07.990685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:07.990711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:09.174 [2024-11-05 11:33:07.990723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.520 ms 00:17:09.174 [2024-11-05 11:33:07.990731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:07.990873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:07.990883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:09.174 [2024-11-05 11:33:07.990898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:17:09.174 [2024-11-05 11:33:07.990906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.006410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.006466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:09.174 [2024-11-05 
11:33:08.006480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.485 ms 00:17:09.174 [2024-11-05 11:33:08.006488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.019710] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:17:09.174 [2024-11-05 11:33:08.026628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.026676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:09.174 [2024-11-05 11:33:08.026688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.055 ms 00:17:09.174 [2024-11-05 11:33:08.026697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.126918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.126983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:09.174 [2024-11-05 11:33:08.126998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.192 ms 00:17:09.174 [2024-11-05 11:33:08.127009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.127192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.127209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:09.174 [2024-11-05 11:33:08.127219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:17:09.174 [2024-11-05 11:33:08.127230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.153315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.153506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:09.174 [2024-11-05 11:33:08.153529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.035 ms 00:17:09.174 [2024-11-05 11:33:08.153540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.178354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.178404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:09.174 [2024-11-05 11:33:08.178417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.788 ms 00:17:09.174 [2024-11-05 11:33:08.178426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.179076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.179102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:09.174 [2024-11-05 11:33:08.179112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:17:09.174 [2024-11-05 11:33:08.179122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.258165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.258226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:09.174 [2024-11-05 11:33:08.258239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.003 ms 00:17:09.174 [2024-11-05 11:33:08.258250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 
11:33:08.285377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.285428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:09.174 [2024-11-05 11:33:08.285442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.042 ms 00:17:09.174 [2024-11-05 11:33:08.285453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.311631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.311679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:09.174 [2024-11-05 11:33:08.311691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.129 ms 00:17:09.174 [2024-11-05 11:33:08.311701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.338126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.338179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:09.174 [2024-11-05 11:33:08.338192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.379 ms 00:17:09.174 [2024-11-05 11:33:08.338202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.338253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.338272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:09.174 [2024-11-05 11:33:08.338281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:09.174 [2024-11-05 11:33:08.338291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.338381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:09.174 [2024-11-05 11:33:08.338395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:09.174 [2024-11-05 11:33:08.338405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:09.174 [2024-11-05 11:33:08.338415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:09.174 [2024-11-05 11:33:08.339545] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4320.249 ms, result 0 00:17:09.174 { 00:17:09.174 "name": "ftl0", 00:17:09.174 "uuid": "159893d6-ef4e-4e3b-9384-701979427b8a" 00:17:09.174 } 00:17:09.174 11:33:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:17:09.174 11:33:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:17:09.174 11:33:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:17:09.434 11:33:08 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:17:09.434 [2024-11-05 11:33:08.671730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:09.434 I/O size of 69632 is greater than zero copy threshold (65536). 00:17:09.434 Zero copy mechanism will not be used. 00:17:09.434 Running I/O for 4 seconds... 
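Before the first benchmark pass, the script checks that the new FTL bdev answers RPCs and then launches bdevperf. A sketch of the equivalent shell steps, using the paths exactly as traced above and assuming the bdevperf application is already running and serving its RPC socket:

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # sanity check: the stats RPC must report a bdev named ftl0
  "$RPC" bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
  # pass 1: queue depth 1, 68 KiB random writes for 4 s; 69632 B exceeds the 65536 B
  # zero-copy threshold, hence the "Zero copy mechanism will not be used" notice below
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632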
00:17:11.763 985.00 IOPS, 65.41 MiB/s [2024-11-05T11:33:11.980Z] 1127.00 IOPS, 74.84 MiB/s [2024-11-05T11:33:12.918Z] 1049.33 IOPS, 69.68 MiB/s [2024-11-05T11:33:12.918Z] 1039.75 IOPS, 69.05 MiB/s 00:17:13.644 Latency(us) 00:17:13.644 [2024-11-05T11:33:12.918Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:13.644 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:17:13.644 ftl0 : 4.00 1039.43 69.02 0.00 0.00 1004.05 220.55 3276.80 00:17:13.644 [2024-11-05T11:33:12.918Z] =================================================================================================================== 00:17:13.644 [2024-11-05T11:33:12.918Z] Total : 1039.43 69.02 0.00 0.00 1004.05 220.55 3276.80 00:17:13.644 { 00:17:13.644 "results": [ 00:17:13.644 { 00:17:13.644 "job": "ftl0", 00:17:13.644 "core_mask": "0x1", 00:17:13.644 "workload": "randwrite", 00:17:13.644 "status": "finished", 00:17:13.644 "queue_depth": 1, 00:17:13.644 "io_size": 69632, 00:17:13.644 "runtime": 4.002212, 00:17:13.644 "iops": 1039.4251978655802, 00:17:13.644 "mibps": 69.02432954576119, 00:17:13.644 "io_failed": 0, 00:17:13.644 "io_timeout": 0, 00:17:13.644 "avg_latency_us": 1004.054721893491, 00:17:13.644 "min_latency_us": 220.55384615384617, 00:17:13.644 "max_latency_us": 3276.8 00:17:13.644 } 00:17:13.644 ], 00:17:13.644 "core_count": 1 00:17:13.644 } 00:17:13.644 [2024-11-05 11:33:12.683516] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:17:13.644 11:33:12 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:17:13.644 [2024-11-05 11:33:12.782606] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:13.644 Running I/O for 4 seconds... 
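Each perform_tests pass prints both the human-readable latency table and a JSON summary like the one embedded in the record above. For post-processing such logs, a small illustrative one-liner (the file name run.json is hypothetical; the field names are the ones visible in the blob above):

  # headline numbers for the first job in the results array
  jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg \(.avg_latency_us) us"' run.json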
00:17:15.602 5716.00 IOPS, 22.33 MiB/s [2024-11-05T11:33:15.821Z] 5002.00 IOPS, 19.54 MiB/s [2024-11-05T11:33:17.210Z] 4823.67 IOPS, 18.84 MiB/s [2024-11-05T11:33:17.210Z] 4713.25 IOPS, 18.41 MiB/s 00:17:17.936 Latency(us) 00:17:17.936 [2024-11-05T11:33:17.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:17.936 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:17:17.936 ftl0 : 4.04 4699.98 18.36 0.00 0.00 27116.25 441.11 52025.50 00:17:17.936 [2024-11-05T11:33:17.210Z] =================================================================================================================== 00:17:17.936 [2024-11-05T11:33:17.210Z] Total : 4699.98 18.36 0.00 0.00 27116.25 0.00 52025.50 00:17:17.936 [2024-11-05 11:33:16.828723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft{ 00:17:17.936 "results": [ 00:17:17.936 { 00:17:17.936 "job": "ftl0", 00:17:17.936 "core_mask": "0x1", 00:17:17.936 "workload": "randwrite", 00:17:17.936 "status": "finished", 00:17:17.936 "queue_depth": 128, 00:17:17.936 "io_size": 4096, 00:17:17.936 "runtime": 4.036403, 00:17:17.936 "iops": 4699.97668716429, 00:17:17.936 "mibps": 18.359283934235506, 00:17:17.936 "io_failed": 0, 00:17:17.936 "io_timeout": 0, 00:17:17.936 "avg_latency_us": 27116.252012829296, 00:17:17.936 "min_latency_us": 441.10769230769233, 00:17:17.936 "max_latency_us": 52025.50153846154 00:17:17.936 } 00:17:17.936 ], 00:17:17.936 "core_count": 1 00:17:17.936 } 00:17:17.936 l0 00:17:17.936 11:33:16 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:17:17.936 [2024-11-05 11:33:16.941188] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:17:17.936 Running I/O for 4 seconds... 
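Taken together, this section drives three bdevperf phases against ftl0: low queue-depth 68 KiB random writes, queue-depth-128 4 KiB random writes, and a queue-depth-128 4 KiB verify pass. A compact sketch of the same sequence (bdevperf.py and its RPC socket are assumed to be set up exactly as in this job):

  BDEVPERF=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py
  # queue-depth / workload / io-size triples matching the three runs in this log
  for cfg in "1 randwrite 69632" "128 randwrite 4096" "128 verify 4096"; do
      set -- $cfg   # split the triple into $1 $2 $3
      "$BDEVPERF" perform_tests -q "$1" -w "$2" -t 4 -o "$3"
  done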
00:17:19.826 4105.00 IOPS, 16.04 MiB/s [2024-11-05T11:33:20.048Z] 4099.00 IOPS, 16.01 MiB/s [2024-11-05T11:33:20.990Z] 4104.33 IOPS, 16.03 MiB/s [2024-11-05T11:33:20.990Z] 4109.50 IOPS, 16.05 MiB/s 00:17:21.716 Latency(us) 00:17:21.716 [2024-11-05T11:33:20.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.716 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:17:21.716 Verification LBA range: start 0x0 length 0x1400000 00:17:21.716 ftl0 : 4.01 4124.30 16.11 0.00 0.00 30945.40 373.37 44161.18 00:17:21.716 [2024-11-05T11:33:20.990Z] =================================================================================================================== 00:17:21.716 [2024-11-05T11:33:20.990Z] Total : 4124.30 16.11 0.00 0.00 30945.40 0.00 44161.18 00:17:21.716 [2024-11-05 11:33:20.973244] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ft{ 00:17:21.716 "results": [ 00:17:21.716 { 00:17:21.716 "job": "ftl0", 00:17:21.716 "core_mask": "0x1", 00:17:21.716 "workload": "verify", 00:17:21.716 "status": "finished", 00:17:21.716 "verify_range": { 00:17:21.716 "start": 0, 00:17:21.716 "length": 20971520 00:17:21.716 }, 00:17:21.716 "queue_depth": 128, 00:17:21.716 "io_size": 4096, 00:17:21.716 "runtime": 4.014744, 00:17:21.716 "iops": 4124.297838168511, 00:17:21.716 "mibps": 16.110538430345745, 00:17:21.716 "io_failed": 0, 00:17:21.716 "io_timeout": 0, 00:17:21.716 "avg_latency_us": 30945.40468544139, 00:17:21.716 "min_latency_us": 373.36615384615385, 00:17:21.716 "max_latency_us": 44161.18153846154 00:17:21.716 } 00:17:21.716 ], 00:17:21.716 "core_count": 1 00:17:21.716 } 00:17:21.716 l0 00:17:21.716 11:33:20 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:17:21.977 [2024-11-05 11:33:21.192112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.977 [2024-11-05 11:33:21.192180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.977 [2024-11-05 11:33:21.192196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.977 [2024-11-05 11:33:21.192210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.977 [2024-11-05 11:33:21.192233] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:21.977 [2024-11-05 11:33:21.195347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.977 [2024-11-05 11:33:21.195552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.977 [2024-11-05 11:33:21.195582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.090 ms 00:17:21.977 [2024-11-05 11:33:21.195591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.977 [2024-11-05 11:33:21.198672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.977 [2024-11-05 11:33:21.198864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.977 [2024-11-05 11:33:21.198891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.034 ms 00:17:21.977 [2024-11-05 11:33:21.198900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.238 [2024-11-05 11:33:21.452562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.238 [2024-11-05 11:33:21.452633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
L2P 00:17:22.239 [2024-11-05 11:33:21.452657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 253.628 ms 00:17:22.239 [2024-11-05 11:33:21.452666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.239 [2024-11-05 11:33:21.459151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.239 [2024-11-05 11:33:21.459200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:22.239 [2024-11-05 11:33:21.459216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.431 ms 00:17:22.239 [2024-11-05 11:33:21.459226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.239 [2024-11-05 11:33:21.486391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.239 [2024-11-05 11:33:21.486604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:22.239 [2024-11-05 11:33:21.486635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.097 ms 00:17:22.239 [2024-11-05 11:33:21.486644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.239 [2024-11-05 11:33:21.504110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.239 [2024-11-05 11:33:21.504162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:22.239 [2024-11-05 11:33:21.504184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.378 ms 00:17:22.239 [2024-11-05 11:33:21.504197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.239 [2024-11-05 11:33:21.504360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.239 [2024-11-05 11:33:21.504372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:22.239 [2024-11-05 11:33:21.504387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:17:22.239 [2024-11-05 11:33:21.504396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.502 [2024-11-05 11:33:21.530245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.502 [2024-11-05 11:33:21.530295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:22.502 [2024-11-05 11:33:21.530310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.827 ms 00:17:22.502 [2024-11-05 11:33:21.530317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.502 [2024-11-05 11:33:21.555640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.502 [2024-11-05 11:33:21.555685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:22.502 [2024-11-05 11:33:21.555700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.270 ms 00:17:22.502 [2024-11-05 11:33:21.555708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.502 [2024-11-05 11:33:21.580819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.502 [2024-11-05 11:33:21.580866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:22.502 [2024-11-05 11:33:21.580882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.057 ms 00:17:22.502 [2024-11-05 11:33:21.580890] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.502 [2024-11-05 11:33:21.605886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.502 [2024-11-05 11:33:21.605934] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:22.502 [2024-11-05 11:33:21.605952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.886 ms 00:17:22.502 [2024-11-05 11:33:21.605959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.502 [2024-11-05 11:33:21.606007] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:22.502 [2024-11-05 11:33:21.606023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:17:22.502 [2024-11-05 11:33:21.606222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:22.502 [2024-11-05 11:33:21.606631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606978] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.606998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.607008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:22.503 [2024-11-05 11:33:21.607024] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:22.503 [2024-11-05 11:33:21.607035] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 159893d6-ef4e-4e3b-9384-701979427b8a 00:17:22.503 [2024-11-05 11:33:21.607044] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:22.503 [2024-11-05 11:33:21.607053] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:22.503 [2024-11-05 11:33:21.607061] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:22.503 [2024-11-05 11:33:21.607071] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:22.503 [2024-11-05 11:33:21.607081] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:22.503 [2024-11-05 11:33:21.607091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:22.503 [2024-11-05 11:33:21.607099] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:22.503 [2024-11-05 11:33:21.607110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:22.503 [2024-11-05 11:33:21.607117] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:22.503 [2024-11-05 11:33:21.607127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.503 [2024-11-05 11:33:21.607134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:22.503 [2024-11-05 11:33:21.607145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.121 ms 00:17:22.503 [2024-11-05 11:33:21.607152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.620776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.503 [2024-11-05 11:33:21.620848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:22.503 [2024-11-05 11:33:21.620867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.580 ms 00:17:22.503 [2024-11-05 11:33:21.620875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.621277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:22.503 [2024-11-05 11:33:21.621293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:22.503 [2024-11-05 11:33:21.621305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:17:22.503 [2024-11-05 11:33:21.621313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.660521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.503 [2024-11-05 11:33:21.660574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:22.503 [2024-11-05 11:33:21.660592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.503 [2024-11-05 11:33:21.660601] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.660672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.503 [2024-11-05 11:33:21.660680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:22.503 [2024-11-05 11:33:21.660691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.503 [2024-11-05 11:33:21.660698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.660786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.503 [2024-11-05 11:33:21.660797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:22.503 [2024-11-05 11:33:21.660836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.503 [2024-11-05 11:33:21.660844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.660862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.503 [2024-11-05 11:33:21.660870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:22.503 [2024-11-05 11:33:21.660881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.503 [2024-11-05 11:33:21.660888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.503 [2024-11-05 11:33:21.747336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.503 [2024-11-05 11:33:21.747403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:22.503 [2024-11-05 11:33:21.747425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.503 [2024-11-05 11:33:21.747434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.818362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.818656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:22.765 [2024-11-05 11:33:21.818684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.818693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.818848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.818862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:22.765 [2024-11-05 11:33:21.818874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.818885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.818935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.818945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:22.765 [2024-11-05 11:33:21.818955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.818963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.819073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.819084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:22.765 [2024-11-05 11:33:21.819098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:17:22.765 [2024-11-05 11:33:21.819107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.819145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.819154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:22.765 [2024-11-05 11:33:21.819164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.819172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.819214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.819224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:22.765 [2024-11-05 11:33:21.819234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.819243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.819294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:22.765 [2024-11-05 11:33:21.819313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:22.765 [2024-11-05 11:33:21.819324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:22.765 [2024-11-05 11:33:21.819332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:22.765 [2024-11-05 11:33:21.819472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 627.313 ms, result 0 00:17:22.765 true 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73105 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 73105 ']' 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 73105 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73105 00:17:22.765 killing process with pid 73105 00:17:22.765 Received shutdown signal, test time was about 4.000000 seconds 00:17:22.765 00:17:22.765 Latency(us) 00:17:22.765 [2024-11-05T11:33:22.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.765 [2024-11-05T11:33:22.039Z] =================================================================================================================== 00:17:22.765 [2024-11-05T11:33:22.039Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73105' 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 73105 00:17:22.765 11:33:21 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 73105 00:17:24.692 Remove shared memory files 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:17:24.692 11:33:23 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:17:24.692 ************************************ 00:17:24.692 END TEST ftl_bdevperf 00:17:24.692 ************************************ 00:17:24.692 00:17:24.692 real 0m23.760s 00:17:24.692 user 0m26.395s 00:17:24.692 sys 0m0.971s 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:24.692 11:33:23 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:17:24.692 11:33:23 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:24.692 11:33:23 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:17:24.692 11:33:23 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:24.692 11:33:23 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:24.692 ************************************ 00:17:24.692 START TEST ftl_trim 00:17:24.692 ************************************ 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:17:24.692 * Looking for test storage... 00:17:24.692 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.692 11:33:23 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.692 --rc genhtml_branch_coverage=1 00:17:24.692 --rc genhtml_function_coverage=1 00:17:24.692 --rc genhtml_legend=1 00:17:24.692 --rc geninfo_all_blocks=1 00:17:24.692 --rc geninfo_unexecuted_blocks=1 00:17:24.692 00:17:24.692 ' 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.692 --rc genhtml_branch_coverage=1 00:17:24.692 --rc genhtml_function_coverage=1 00:17:24.692 --rc genhtml_legend=1 00:17:24.692 --rc geninfo_all_blocks=1 00:17:24.692 --rc geninfo_unexecuted_blocks=1 00:17:24.692 00:17:24.692 ' 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.692 --rc genhtml_branch_coverage=1 00:17:24.692 --rc genhtml_function_coverage=1 00:17:24.692 --rc genhtml_legend=1 00:17:24.692 --rc geninfo_all_blocks=1 00:17:24.692 --rc geninfo_unexecuted_blocks=1 00:17:24.692 00:17:24.692 ' 00:17:24.692 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:24.692 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.692 --rc genhtml_branch_coverage=1 00:17:24.692 --rc genhtml_function_coverage=1 00:17:24.692 --rc genhtml_legend=1 00:17:24.692 --rc geninfo_all_blocks=1 00:17:24.692 --rc geninfo_unexecuted_blocks=1 00:17:24.692 00:17:24.692 ' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
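The lcov version check traced just above (scripts/common.sh: lt → cmp_versions → decimal) boils down to a component-wise numeric comparison of "1.15" against "2". A minimal bash sketch of that pattern follows; it is simplified, assumes purely numeric version components, and the helper name and structure are illustrative rather than the exact upstream implementation:

version_lt() {   # succeeds when $1 sorts strictly below $2, e.g. version_lt 1.15 2
    local IFS=.-:                          # split on the same separators the trace shows
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}; b=${ver2[v]:-0}   # missing components compare as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1                               # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov older than 2.x: keep branch/function coverage flags"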
00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.692 11:33:23 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:24.693 11:33:23 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=73465 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:24.693 11:33:23 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 73465 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73465 ']' 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:24.693 11:33:23 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:24.953 [2024-11-05 11:33:23.993136] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:17:24.953 [2024-11-05 11:33:23.993515] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73465 ] 00:17:24.953 [2024-11-05 11:33:24.157191] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:25.214 [2024-11-05 11:33:24.284736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.214 [2024-11-05 11:33:24.285063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:25.214 [2024-11-05 11:33:24.285145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.788 11:33:24 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:25.788 11:33:24 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:25.788 11:33:24 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:26.049 11:33:25 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:26.049 11:33:25 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:26.049 11:33:25 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:26.049 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:26.049 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:26.049 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:26.049 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:26.049 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:26.311 { 00:17:26.311 "name": "nvme0n1", 00:17:26.311 "aliases": [ 
00:17:26.311 "622e3237-c04b-42e4-9976-c23a6ef46af4" 00:17:26.311 ], 00:17:26.311 "product_name": "NVMe disk", 00:17:26.311 "block_size": 4096, 00:17:26.311 "num_blocks": 1310720, 00:17:26.311 "uuid": "622e3237-c04b-42e4-9976-c23a6ef46af4", 00:17:26.311 "numa_id": -1, 00:17:26.311 "assigned_rate_limits": { 00:17:26.311 "rw_ios_per_sec": 0, 00:17:26.311 "rw_mbytes_per_sec": 0, 00:17:26.311 "r_mbytes_per_sec": 0, 00:17:26.311 "w_mbytes_per_sec": 0 00:17:26.311 }, 00:17:26.311 "claimed": true, 00:17:26.311 "claim_type": "read_many_write_one", 00:17:26.311 "zoned": false, 00:17:26.311 "supported_io_types": { 00:17:26.311 "read": true, 00:17:26.311 "write": true, 00:17:26.311 "unmap": true, 00:17:26.311 "flush": true, 00:17:26.311 "reset": true, 00:17:26.311 "nvme_admin": true, 00:17:26.311 "nvme_io": true, 00:17:26.311 "nvme_io_md": false, 00:17:26.311 "write_zeroes": true, 00:17:26.311 "zcopy": false, 00:17:26.311 "get_zone_info": false, 00:17:26.311 "zone_management": false, 00:17:26.311 "zone_append": false, 00:17:26.311 "compare": true, 00:17:26.311 "compare_and_write": false, 00:17:26.311 "abort": true, 00:17:26.311 "seek_hole": false, 00:17:26.311 "seek_data": false, 00:17:26.311 "copy": true, 00:17:26.311 "nvme_iov_md": false 00:17:26.311 }, 00:17:26.311 "driver_specific": { 00:17:26.311 "nvme": [ 00:17:26.311 { 00:17:26.311 "pci_address": "0000:00:11.0", 00:17:26.311 "trid": { 00:17:26.311 "trtype": "PCIe", 00:17:26.311 "traddr": "0000:00:11.0" 00:17:26.311 }, 00:17:26.311 "ctrlr_data": { 00:17:26.311 "cntlid": 0, 00:17:26.311 "vendor_id": "0x1b36", 00:17:26.311 "model_number": "QEMU NVMe Ctrl", 00:17:26.311 "serial_number": "12341", 00:17:26.311 "firmware_revision": "8.0.0", 00:17:26.311 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:26.311 "oacs": { 00:17:26.311 "security": 0, 00:17:26.311 "format": 1, 00:17:26.311 "firmware": 0, 00:17:26.311 "ns_manage": 1 00:17:26.311 }, 00:17:26.311 "multi_ctrlr": false, 00:17:26.311 "ana_reporting": false 00:17:26.311 }, 00:17:26.311 "vs": { 00:17:26.311 "nvme_version": "1.4" 00:17:26.311 }, 00:17:26.311 "ns_data": { 00:17:26.311 "id": 1, 00:17:26.311 "can_share": false 00:17:26.311 } 00:17:26.311 } 00:17:26.311 ], 00:17:26.311 "mp_policy": "active_passive" 00:17:26.311 } 00:17:26.311 } 00:17:26.311 ]' 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:26.311 11:33:25 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:17:26.311 11:33:25 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:26.311 11:33:25 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:26.311 11:33:25 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:26.311 11:33:25 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:26.311 11:33:25 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:26.572 11:33:25 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=42306015-8e58-4cac-8e17-327ad21a969c 00:17:26.572 11:33:25 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:26.572 11:33:25 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 42306015-8e58-4cac-8e17-327ad21a969c 00:17:26.832 11:33:26 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:27.093 11:33:26 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=48101ea4-76df-4266-9744-b003667d64fc 00:17:27.093 11:33:26 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 48101ea4-76df-4266-9744-b003667d64fc 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:27.353 11:33:26 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.353 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.353 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:27.353 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:27.353 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:27.354 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:27.614 { 00:17:27.614 "name": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:27.614 "aliases": [ 00:17:27.614 "lvs/nvme0n1p0" 00:17:27.614 ], 00:17:27.614 "product_name": "Logical Volume", 00:17:27.614 "block_size": 4096, 00:17:27.614 "num_blocks": 26476544, 00:17:27.614 "uuid": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:27.614 "assigned_rate_limits": { 00:17:27.614 "rw_ios_per_sec": 0, 00:17:27.614 "rw_mbytes_per_sec": 0, 00:17:27.614 "r_mbytes_per_sec": 0, 00:17:27.614 "w_mbytes_per_sec": 0 00:17:27.614 }, 00:17:27.614 "claimed": false, 00:17:27.614 "zoned": false, 00:17:27.614 "supported_io_types": { 00:17:27.614 "read": true, 00:17:27.614 "write": true, 00:17:27.614 "unmap": true, 00:17:27.614 "flush": false, 00:17:27.614 "reset": true, 00:17:27.614 "nvme_admin": false, 00:17:27.614 "nvme_io": false, 00:17:27.614 "nvme_io_md": false, 00:17:27.614 "write_zeroes": true, 00:17:27.614 "zcopy": false, 00:17:27.614 "get_zone_info": false, 00:17:27.614 "zone_management": false, 00:17:27.614 "zone_append": false, 00:17:27.614 "compare": false, 00:17:27.614 "compare_and_write": false, 00:17:27.614 "abort": false, 00:17:27.614 "seek_hole": true, 00:17:27.614 "seek_data": true, 00:17:27.614 "copy": false, 00:17:27.614 "nvme_iov_md": false 00:17:27.614 }, 00:17:27.614 "driver_specific": { 00:17:27.614 "lvol": { 00:17:27.614 "lvol_store_uuid": "48101ea4-76df-4266-9744-b003667d64fc", 00:17:27.614 "base_bdev": "nvme0n1", 00:17:27.614 "thin_provision": true, 00:17:27.614 "num_allocated_clusters": 0, 00:17:27.614 "snapshot": false, 00:17:27.614 "clone": false, 00:17:27.614 "esnap_clone": false 00:17:27.614 } 00:17:27.614 } 00:17:27.614 } 00:17:27.614 ]' 00:17:27.614 11:33:26 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:27.614 11:33:26 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:27.614 11:33:26 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:27.614 11:33:26 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:27.614 11:33:26 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:27.874 11:33:26 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:27.874 11:33:26 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:27.874 11:33:27 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.874 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:27.874 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:27.874 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:27.874 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:27.874 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:28.135 { 00:17:28.135 "name": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:28.135 "aliases": [ 00:17:28.135 "lvs/nvme0n1p0" 00:17:28.135 ], 00:17:28.135 "product_name": "Logical Volume", 00:17:28.135 "block_size": 4096, 00:17:28.135 "num_blocks": 26476544, 00:17:28.135 "uuid": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:28.135 "assigned_rate_limits": { 00:17:28.135 "rw_ios_per_sec": 0, 00:17:28.135 "rw_mbytes_per_sec": 0, 00:17:28.135 "r_mbytes_per_sec": 0, 00:17:28.135 "w_mbytes_per_sec": 0 00:17:28.135 }, 00:17:28.135 "claimed": false, 00:17:28.135 "zoned": false, 00:17:28.135 "supported_io_types": { 00:17:28.135 "read": true, 00:17:28.135 "write": true, 00:17:28.135 "unmap": true, 00:17:28.135 "flush": false, 00:17:28.135 "reset": true, 00:17:28.135 "nvme_admin": false, 00:17:28.135 "nvme_io": false, 00:17:28.135 "nvme_io_md": false, 00:17:28.135 "write_zeroes": true, 00:17:28.135 "zcopy": false, 00:17:28.135 "get_zone_info": false, 00:17:28.135 "zone_management": false, 00:17:28.135 "zone_append": false, 00:17:28.135 "compare": false, 00:17:28.135 "compare_and_write": false, 00:17:28.135 "abort": false, 00:17:28.135 "seek_hole": true, 00:17:28.135 "seek_data": true, 00:17:28.135 "copy": false, 00:17:28.135 "nvme_iov_md": false 00:17:28.135 }, 00:17:28.135 "driver_specific": { 00:17:28.135 "lvol": { 00:17:28.135 "lvol_store_uuid": "48101ea4-76df-4266-9744-b003667d64fc", 00:17:28.135 "base_bdev": "nvme0n1", 00:17:28.135 "thin_provision": true, 00:17:28.135 "num_allocated_clusters": 0, 00:17:28.135 "snapshot": false, 00:17:28.135 "clone": false, 00:17:28.135 "esnap_clone": false 00:17:28.135 } 00:17:28.135 } 00:17:28.135 } 00:17:28.135 ]' 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:28.135 11:33:27 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:28.135 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:28.135 11:33:27 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:28.135 11:33:27 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:28.397 11:33:27 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:28.397 11:33:27 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:28.397 11:33:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:28.397 { 00:17:28.397 "name": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:28.397 "aliases": [ 00:17:28.397 "lvs/nvme0n1p0" 00:17:28.397 ], 00:17:28.397 "product_name": "Logical Volume", 00:17:28.397 "block_size": 4096, 00:17:28.397 "num_blocks": 26476544, 00:17:28.397 "uuid": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:28.397 "assigned_rate_limits": { 00:17:28.397 "rw_ios_per_sec": 0, 00:17:28.397 "rw_mbytes_per_sec": 0, 00:17:28.397 "r_mbytes_per_sec": 0, 00:17:28.397 "w_mbytes_per_sec": 0 00:17:28.397 }, 00:17:28.397 "claimed": false, 00:17:28.397 "zoned": false, 00:17:28.397 "supported_io_types": { 00:17:28.397 "read": true, 00:17:28.397 "write": true, 00:17:28.397 "unmap": true, 00:17:28.397 "flush": false, 00:17:28.397 "reset": true, 00:17:28.397 "nvme_admin": false, 00:17:28.397 "nvme_io": false, 00:17:28.397 "nvme_io_md": false, 00:17:28.397 "write_zeroes": true, 00:17:28.397 "zcopy": false, 00:17:28.397 "get_zone_info": false, 00:17:28.397 "zone_management": false, 00:17:28.397 "zone_append": false, 00:17:28.397 "compare": false, 00:17:28.397 "compare_and_write": false, 00:17:28.397 "abort": false, 00:17:28.397 "seek_hole": true, 00:17:28.397 "seek_data": true, 00:17:28.397 "copy": false, 00:17:28.397 "nvme_iov_md": false 00:17:28.397 }, 00:17:28.397 "driver_specific": { 00:17:28.397 "lvol": { 00:17:28.397 "lvol_store_uuid": "48101ea4-76df-4266-9744-b003667d64fc", 00:17:28.397 "base_bdev": "nvme0n1", 00:17:28.397 "thin_provision": true, 00:17:28.397 "num_allocated_clusters": 0, 00:17:28.397 "snapshot": false, 00:17:28.397 "clone": false, 00:17:28.397 "esnap_clone": false 00:17:28.397 } 00:17:28.397 } 00:17:28.397 } 00:17:28.397 ]' 00:17:28.397 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:28.658 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:17:28.658 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:28.658 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:17:28.658 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:28.658 11:33:27 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:17:28.658 11:33:27 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:28.658 11:33:27 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:28.658 [2024-11-05 11:33:27.907665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-11-05 11:33:27.907705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.658 [2024-11-05 11:33:27.907717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.658 [2024-11-05 11:33:27.907725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.658 [2024-11-05 11:33:27.909977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-11-05 11:33:27.910005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.658 [2024-11-05 11:33:27.910016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms 00:17:28.658 [2024-11-05 11:33:27.910022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.658 [2024-11-05 11:33:27.910095] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.658 [2024-11-05 11:33:27.910683] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.658 [2024-11-05 11:33:27.910706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.658 [2024-11-05 11:33:27.910721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.658 [2024-11-05 11:33:27.910729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:17:28.658 [2024-11-05 11:33:27.910735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.910833] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:17:28.659 [2024-11-05 11:33:27.911757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.911786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:28.659 [2024-11-05 11:33:27.911793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:28.659 [2024-11-05 11:33:27.911808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.916551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.916577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.659 [2024-11-05 11:33:27.916585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.686 ms 00:17:28.659 [2024-11-05 11:33:27.916592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.916682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.916692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.659 [2024-11-05 11:33:27.916698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.055 ms 00:17:28.659 [2024-11-05 11:33:27.916707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.916733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.916741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.659 [2024-11-05 11:33:27.916747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.659 [2024-11-05 11:33:27.916753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.916779] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.659 [2024-11-05 11:33:27.919665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.919689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.659 [2024-11-05 11:33:27.919698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.888 ms 00:17:28.659 [2024-11-05 11:33:27.919703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.919744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.919751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.659 [2024-11-05 11:33:27.919759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:28.659 [2024-11-05 11:33:27.919775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.919798] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:28.659 [2024-11-05 11:33:27.919915] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.659 [2024-11-05 11:33:27.919930] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.659 [2024-11-05 11:33:27.919939] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.659 [2024-11-05 11:33:27.919948] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.659 [2024-11-05 11:33:27.919954] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.659 [2024-11-05 11:33:27.919962] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.659 [2024-11-05 11:33:27.919968] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.659 [2024-11-05 11:33:27.919975] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.659 [2024-11-05 11:33:27.919980] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.659 [2024-11-05 11:33:27.919987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 [2024-11-05 11:33:27.919994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.659 [2024-11-05 11:33:27.920001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.190 ms 00:17:28.659 [2024-11-05 11:33:27.920007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.920080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.659 
[2024-11-05 11:33:27.920086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.659 [2024-11-05 11:33:27.920094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:28.659 [2024-11-05 11:33:27.920100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.659 [2024-11-05 11:33:27.920188] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.659 [2024-11-05 11:33:27.920228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.659 [2024-11-05 11:33:27.920239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.659 [2024-11-05 11:33:27.920258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920265] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.659 [2024-11-05 11:33:27.920277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.659 [2024-11-05 11:33:27.920288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.659 [2024-11-05 11:33:27.920293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.659 [2024-11-05 11:33:27.920299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.659 [2024-11-05 11:33:27.920304] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.659 [2024-11-05 11:33:27.920310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.659 [2024-11-05 11:33:27.920316] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.659 [2024-11-05 11:33:27.920328] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.659 [2024-11-05 11:33:27.920347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.659 [2024-11-05 11:33:27.920363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920370] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.659 [2024-11-05 11:33:27.920381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920387] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:28.659 [2024-11-05 11:33:27.920399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.659 [2024-11-05 11:33:27.920418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.659 [2024-11-05 11:33:27.920430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.659 [2024-11-05 11:33:27.920435] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.659 [2024-11-05 11:33:27.920441] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.659 [2024-11-05 11:33:27.920446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.659 [2024-11-05 11:33:27.920452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.659 [2024-11-05 11:33:27.920457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.659 [2024-11-05 11:33:27.920469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.659 [2024-11-05 11:33:27.920475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920480] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.659 [2024-11-05 11:33:27.920486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.659 [2024-11-05 11:33:27.920492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.659 [2024-11-05 11:33:27.920504] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.659 [2024-11-05 11:33:27.920513] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.659 [2024-11-05 11:33:27.920518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.659 [2024-11-05 11:33:27.920524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.659 [2024-11-05 11:33:27.920529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.659 [2024-11-05 11:33:27.920535] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.659 [2024-11-05 11:33:27.920543] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.659 [2024-11-05 11:33:27.920552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.659 [2024-11-05 11:33:27.920558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.659 [2024-11-05 11:33:27.920565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.659 [2024-11-05 11:33:27.920570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:28.659 [2024-11-05 11:33:27.920577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.659 [2024-11-05 11:33:27.920584] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.659 [2024-11-05 11:33:27.920590] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.660 [2024-11-05 11:33:27.920596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.660 [2024-11-05 11:33:27.920602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.660 [2024-11-05 11:33:27.920608] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.660 [2024-11-05 11:33:27.920616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920634] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.660 [2024-11-05 11:33:27.920647] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.660 [2024-11-05 11:33:27.920655] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920661] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.660 [2024-11-05 11:33:27.920668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.660 [2024-11-05 11:33:27.920674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.660 [2024-11-05 11:33:27.920681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.660 [2024-11-05 11:33:27.920687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.660 [2024-11-05 11:33:27.920697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.660 [2024-11-05 11:33:27.920702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:17:28.660 [2024-11-05 11:33:27.920709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.660 [2024-11-05 11:33:27.920776] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:28.660 [2024-11-05 11:33:27.920786] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:31.214 [2024-11-05 11:33:30.318879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.319084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:31.214 [2024-11-05 11:33:30.319108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2398.091 ms 00:17:31.214 [2024-11-05 11:33:30.319119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.344094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.344135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:31.214 [2024-11-05 11:33:30.344147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.739 ms 00:17:31.214 [2024-11-05 11:33:30.344156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.344291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.344303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:31.214 [2024-11-05 11:33:30.344312] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:31.214 [2024-11-05 11:33:30.344323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.382598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.382640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:31.214 [2024-11-05 11:33:30.382654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.229 ms 00:17:31.214 [2024-11-05 11:33:30.382665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.382740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.382753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:31.214 [2024-11-05 11:33:30.382762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:31.214 [2024-11-05 11:33:30.382771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.383106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.383124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:31.214 [2024-11-05 11:33:30.383133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:17:31.214 [2024-11-05 11:33:30.383141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.383251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.383261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:31.214 [2024-11-05 11:33:30.383269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:31.214 [2024-11-05 11:33:30.383279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.214 [2024-11-05 11:33:30.399012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.214 [2024-11-05 11:33:30.399042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:31.214 [2024-11-05 11:33:30.399052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.690 ms 00:17:31.214 [2024-11-05 11:33:30.399061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.215 [2024-11-05 11:33:30.410260] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:31.215 [2024-11-05 11:33:30.424133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.215 [2024-11-05 11:33:30.424164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:31.215 [2024-11-05 11:33:30.424177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.978 ms 00:17:31.215 [2024-11-05 11:33:30.424187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.495919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.495965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:31.477 [2024-11-05 11:33:30.495981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.666 ms 00:17:31.477 [2024-11-05 11:33:30.495992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.496199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.496211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:31.477 [2024-11-05 11:33:30.496223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.139 ms 00:17:31.477 [2024-11-05 11:33:30.496231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.519638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.519672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:31.477 [2024-11-05 11:33:30.519688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.378 ms 00:17:31.477 [2024-11-05 11:33:30.519695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.542265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.542297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:31.477 [2024-11-05 11:33:30.542310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.513 ms 00:17:31.477 [2024-11-05 11:33:30.542317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.542903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.542920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:31.477 [2024-11-05 11:33:30.542935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:17:31.477 [2024-11-05 11:33:30.542947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.612683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.612719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:31.477 [2024-11-05 11:33:30.612736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.696 ms 00:17:31.477 [2024-11-05 11:33:30.612746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
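The FTL startup being traced here runs on a bdev stack that was put together a few steps earlier in this log. Condensed from the rpc.py calls traced above (paths, bdev names and UUIDs exactly as reported in this run; error handling and the intermediate bdev_get_bdevs/jq size checks are omitted), the assembly was roughly:

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0        # base NVMe device -> nvme0n1
$RPC bdev_lvol_create_lvstore nvme0n1 lvs                                # lvstore 48101ea4-76df-4266-9744-b003667d64fc
$RPC bdev_lvol_create nvme0n1p0 103424 -t -u 48101ea4-76df-4266-9744-b003667d64fc   # thin lvol 6d66e1fd-6aa3-45ae-8801-b2bb3921b004
$RPC bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0         # NV cache NVMe device -> nvc0n1
$RPC bdev_split_create nvc0n1 -s 5171 1                                  # nvc0n1p0, used as the write buffer cache
$RPC -t 240 bdev_ftl_create -b ftl0 -d 6d66e1fd-6aa3-45ae-8801-b2bb3921b004 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10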
00:17:31.477 [2024-11-05 11:33:30.636694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.636728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:31.477 [2024-11-05 11:33:30.636741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.835 ms 00:17:31.477 [2024-11-05 11:33:30.636749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.659327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.659359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:31.477 [2024-11-05 11:33:30.659371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.506 ms 00:17:31.477 [2024-11-05 11:33:30.659378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.683002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.683034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:31.477 [2024-11-05 11:33:30.683047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.557 ms 00:17:31.477 [2024-11-05 11:33:30.683066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.683125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.683135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:31.477 [2024-11-05 11:33:30.683147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:31.477 [2024-11-05 11:33:30.683156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.683227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:31.477 [2024-11-05 11:33:30.683236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:31.477 [2024-11-05 11:33:30.683245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:31.477 [2024-11-05 11:33:30.683252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:31.477 [2024-11-05 11:33:30.683993] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:31.477 [2024-11-05 11:33:30.687057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2776.029 ms, result 0 00:17:31.477 [2024-11-05 11:33:30.687905] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:31.477 { 00:17:31.477 "name": "ftl0", 00:17:31.477 "uuid": "9a61a43b-8840-4edd-a0ff-ca2f1deb6908" 00:17:31.477 } 00:17:31.477 11:33:30 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:31.477 11:33:30 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:31.738 11:33:30 ftl.ftl_trim --
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:32.000 [ 00:17:32.000 { 00:17:32.000 "name": "ftl0", 00:17:32.000 "aliases": [ 00:17:32.000 "9a61a43b-8840-4edd-a0ff-ca2f1deb6908" 00:17:32.000 ], 00:17:32.000 "product_name": "FTL disk", 00:17:32.000 "block_size": 4096, 00:17:32.000 "num_blocks": 23592960, 00:17:32.000 "uuid": "9a61a43b-8840-4edd-a0ff-ca2f1deb6908", 00:17:32.000 "assigned_rate_limits": { 00:17:32.000 "rw_ios_per_sec": 0, 00:17:32.000 "rw_mbytes_per_sec": 0, 00:17:32.000 "r_mbytes_per_sec": 0, 00:17:32.000 "w_mbytes_per_sec": 0 00:17:32.000 }, 00:17:32.000 "claimed": false, 00:17:32.000 "zoned": false, 00:17:32.000 "supported_io_types": { 00:17:32.000 "read": true, 00:17:32.000 "write": true, 00:17:32.000 "unmap": true, 00:17:32.000 "flush": true, 00:17:32.000 "reset": false, 00:17:32.000 "nvme_admin": false, 00:17:32.000 "nvme_io": false, 00:17:32.000 "nvme_io_md": false, 00:17:32.000 "write_zeroes": true, 00:17:32.000 "zcopy": false, 00:17:32.000 "get_zone_info": false, 00:17:32.000 "zone_management": false, 00:17:32.000 "zone_append": false, 00:17:32.000 "compare": false, 00:17:32.000 "compare_and_write": false, 00:17:32.000 "abort": false, 00:17:32.000 "seek_hole": false, 00:17:32.000 "seek_data": false, 00:17:32.000 "copy": false, 00:17:32.000 "nvme_iov_md": false 00:17:32.000 }, 00:17:32.000 "driver_specific": { 00:17:32.000 "ftl": { 00:17:32.000 "base_bdev": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 00:17:32.000 "cache": "nvc0n1p0" 00:17:32.000 } 00:17:32.000 } 00:17:32.000 } 00:17:32.000 ] 00:17:32.000 11:33:31 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:17:32.000 11:33:31 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:32.000 11:33:31 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:32.261 11:33:31 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:32.261 11:33:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:32.546 11:33:31 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:32.546 { 00:17:32.546 "name": "ftl0", 00:17:32.546 "aliases": [ 00:17:32.546 "9a61a43b-8840-4edd-a0ff-ca2f1deb6908" 00:17:32.546 ], 00:17:32.546 "product_name": "FTL disk", 00:17:32.546 "block_size": 4096, 00:17:32.546 "num_blocks": 23592960, 00:17:32.546 "uuid": "9a61a43b-8840-4edd-a0ff-ca2f1deb6908", 00:17:32.546 "assigned_rate_limits": { 00:17:32.546 "rw_ios_per_sec": 0, 00:17:32.546 "rw_mbytes_per_sec": 0, 00:17:32.546 "r_mbytes_per_sec": 0, 00:17:32.546 "w_mbytes_per_sec": 0 00:17:32.546 }, 00:17:32.546 "claimed": false, 00:17:32.546 "zoned": false, 00:17:32.546 "supported_io_types": { 00:17:32.546 "read": true, 00:17:32.546 "write": true, 00:17:32.546 "unmap": true, 00:17:32.546 "flush": true, 00:17:32.546 "reset": false, 00:17:32.546 "nvme_admin": false, 00:17:32.546 "nvme_io": false, 00:17:32.546 "nvme_io_md": false, 00:17:32.546 "write_zeroes": true, 00:17:32.546 "zcopy": false, 00:17:32.546 "get_zone_info": false, 00:17:32.546 "zone_management": false, 00:17:32.546 "zone_append": false, 00:17:32.546 "compare": false, 00:17:32.546 "compare_and_write": false, 00:17:32.546 "abort": false, 00:17:32.546 "seek_hole": false, 00:17:32.546 "seek_data": false, 00:17:32.546 "copy": false, 00:17:32.546 "nvme_iov_md": false 00:17:32.546 }, 00:17:32.546 "driver_specific": { 00:17:32.546 "ftl": { 00:17:32.546 "base_bdev": "6d66e1fd-6aa3-45ae-8801-b2bb3921b004", 
00:17:32.546 "cache": "nvc0n1p0" 00:17:32.546 } 00:17:32.546 } 00:17:32.546 } 00:17:32.546 ]' 00:17:32.546 11:33:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:32.546 11:33:31 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:32.546 11:33:31 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:32.546 [2024-11-05 11:33:31.794046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.546 [2024-11-05 11:33:31.794211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:32.546 [2024-11-05 11:33:31.794231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:32.546 [2024-11-05 11:33:31.794241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.546 [2024-11-05 11:33:31.794282] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:32.546 [2024-11-05 11:33:31.796879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.546 [2024-11-05 11:33:31.796909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:32.546 [2024-11-05 11:33:31.796927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.579 ms 00:17:32.546 [2024-11-05 11:33:31.796935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.546 [2024-11-05 11:33:31.797390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.546 [2024-11-05 11:33:31.797404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:32.546 [2024-11-05 11:33:31.797414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:17:32.546 [2024-11-05 11:33:31.797421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.546 [2024-11-05 11:33:31.801060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.546 [2024-11-05 11:33:31.801172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:32.546 [2024-11-05 11:33:31.801189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.614 ms 00:17:32.546 [2024-11-05 11:33:31.801199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.546 [2024-11-05 11:33:31.808134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.546 [2024-11-05 11:33:31.808247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:32.546 [2024-11-05 11:33:31.808265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.888 ms 00:17:32.546 [2024-11-05 11:33:31.808273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.832031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.832062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:32.810 [2024-11-05 11:33:31.832077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.680 ms 00:17:32.810 [2024-11-05 11:33:31.832084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.846870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.846993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:32.810 [2024-11-05 11:33:31.847012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 14.727 ms 00:17:32.810 [2024-11-05 11:33:31.847021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.847209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.847222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:32.810 [2024-11-05 11:33:31.847232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:17:32.810 [2024-11-05 11:33:31.847239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.870023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.870131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:32.810 [2024-11-05 11:33:31.870148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.753 ms 00:17:32.810 [2024-11-05 11:33:31.870155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.892822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.892927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:32.810 [2024-11-05 11:33:31.892946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.613 ms 00:17:32.810 [2024-11-05 11:33:31.892953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.914606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.810 [2024-11-05 11:33:31.914641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:32.810 [2024-11-05 11:33:31.914654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.599 ms 00:17:32.810 [2024-11-05 11:33:31.914661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.810 [2024-11-05 11:33:31.936325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.811 [2024-11-05 11:33:31.936435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:32.811 [2024-11-05 11:33:31.936453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.564 ms 00:17:32.811 [2024-11-05 11:33:31.936461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.811 [2024-11-05 11:33:31.936509] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:32.811 [2024-11-05 11:33:31.936523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936584] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 
[2024-11-05 11:33:31.936820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.936994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:32.811 [2024-11-05 11:33:31.937027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:32.811 [2024-11-05 11:33:31.937256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:32.812 [2024-11-05 11:33:31.937384] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:32.812 [2024-11-05 11:33:31.937395] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:17:32.812 [2024-11-05 11:33:31.937403] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:32.812 [2024-11-05 11:33:31.937411] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:32.812 [2024-11-05 11:33:31.937418] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:32.812 [2024-11-05 11:33:31.937426] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:32.812 [2024-11-05 11:33:31.937433] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:32.812 [2024-11-05 11:33:31.937442] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:32.812 [2024-11-05 11:33:31.937451] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:32.812 [2024-11-05 11:33:31.937459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:32.812 [2024-11-05 11:33:31.937465] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:32.812 [2024-11-05 11:33:31.937473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.812 [2024-11-05 11:33:31.937480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:32.812 [2024-11-05 11:33:31.937490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:17:32.812 [2024-11-05 11:33:31.937497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.949935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.812 [2024-11-05 11:33:31.949963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:32.812 [2024-11-05 11:33:31.949977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.409 ms 00:17:32.812 [2024-11-05 11:33:31.949985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.950356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:32.812 [2024-11-05 11:33:31.950366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:32.812 [2024-11-05 11:33:31.950375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:17:32.812 [2024-11-05 11:33:31.950382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.993694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.812 [2024-11-05 11:33:31.993728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:32.812 [2024-11-05 11:33:31.993740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.812 [2024-11-05 11:33:31.993750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.993870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.812 [2024-11-05 11:33:31.993881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:32.812 [2024-11-05 11:33:31.993890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.812 [2024-11-05 11:33:31.993897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.993953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.812 [2024-11-05 11:33:31.993962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:32.812 [2024-11-05 11:33:31.993972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.812 [2024-11-05 11:33:31.993980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:31.994009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.812 [2024-11-05 11:33:31.994017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:32.812 [2024-11-05 11:33:31.994026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.812 [2024-11-05 11:33:31.994033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:32.812 [2024-11-05 11:33:32.073048] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:32.812 [2024-11-05 11:33:32.073089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:32.812 [2024-11-05 11:33:32.073101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:32.812 [2024-11-05 11:33:32.073109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.134550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.134587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:33.074 [2024-11-05 11:33:32.134599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.134606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.134685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.134694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:33.074 [2024-11-05 11:33:32.134718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.134725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.134777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.134787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:33.074 [2024-11-05 11:33:32.134796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.134825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.134930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.134940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:33.074 [2024-11-05 11:33:32.134950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.134957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.135003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.135012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:33.074 [2024-11-05 11:33:32.135024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.135031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.135072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.135080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:33.074 [2024-11-05 11:33:32.135094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.135100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:33.074 [2024-11-05 11:33:32.135155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:33.074 [2024-11-05 11:33:32.135166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:33.074 [2024-11-05 11:33:32.135174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:33.074 [2024-11-05 11:33:32.135181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:33.074 [2024-11-05 11:33:32.135344] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 341.280 ms, result 0 00:17:33.074 true 00:17:33.074 11:33:32 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 73465 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73465 ']' 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73465 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73465 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73465' 00:17:33.074 killing process with pid 73465 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73465 00:17:33.074 11:33:32 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73465 00:17:39.667 11:33:37 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:39.929 65536+0 records in 00:17:39.929 65536+0 records out 00:17:39.929 268435456 bytes (268 MB, 256 MiB) copied, 1.09566 s, 245 MB/s 00:17:39.929 11:33:39 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.929 [2024-11-05 11:33:39.163090] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:17:39.929 [2024-11-05 11:33:39.163243] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73647 ] 00:17:40.191 [2024-11-05 11:33:39.328389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:40.191 [2024-11-05 11:33:39.449542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:40.765 [2024-11-05 11:33:39.740331] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.765 [2024-11-05 11:33:39.740669] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:40.765 [2024-11-05 11:33:39.902341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.902410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:40.765 [2024-11-05 11:33:39.902424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.765 [2024-11-05 11:33:39.902433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.905700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.905763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:40.765 [2024-11-05 11:33:39.905776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.244 ms 00:17:40.765 [2024-11-05 11:33:39.905785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.905957] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:40.765 [2024-11-05 11:33:39.906761] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:40.765 [2024-11-05 11:33:39.906798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.906821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:40.765 [2024-11-05 11:33:39.906832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:17:40.765 [2024-11-05 11:33:39.906841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.908575] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.765 [2024-11-05 11:33:39.922943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.922993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.765 [2024-11-05 11:33:39.923013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.369 ms 00:17:40.765 [2024-11-05 11:33:39.923022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.923144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.923156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.765 [2024-11-05 11:33:39.923166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:40.765 [2024-11-05 11:33:39.923174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.931450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:40.765 [2024-11-05 11:33:39.931499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.765 [2024-11-05 11:33:39.931509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.230 ms 00:17:40.765 [2024-11-05 11:33:39.931517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.931626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.931637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.765 [2024-11-05 11:33:39.931646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:40.765 [2024-11-05 11:33:39.931655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.931683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.931693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.765 [2024-11-05 11:33:39.931705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.765 [2024-11-05 11:33:39.931713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.931734] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.765 [2024-11-05 11:33:39.935814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.935855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.765 [2024-11-05 11:33:39.935866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.085 ms 00:17:40.765 [2024-11-05 11:33:39.935875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.935950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.935960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.765 [2024-11-05 11:33:39.935970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:40.765 [2024-11-05 11:33:39.935978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.935997] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.765 [2024-11-05 11:33:39.936019] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.765 [2024-11-05 11:33:39.936059] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.765 [2024-11-05 11:33:39.936076] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.765 [2024-11-05 11:33:39.936183] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.765 [2024-11-05 11:33:39.936194] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.765 [2024-11-05 11:33:39.936205] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.765 [2024-11-05 11:33:39.936216] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936225] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936237] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.765 [2024-11-05 11:33:39.936245] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.765 [2024-11-05 11:33:39.936253] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.765 [2024-11-05 11:33:39.936261] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.765 [2024-11-05 11:33:39.936269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.936277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.765 [2024-11-05 11:33:39.936285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.274 ms 00:17:40.765 [2024-11-05 11:33:39.936292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.936381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.765 [2024-11-05 11:33:39.936390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.765 [2024-11-05 11:33:39.936398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:17:40.765 [2024-11-05 11:33:39.936408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.765 [2024-11-05 11:33:39.936509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.765 [2024-11-05 11:33:39.936519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.765 [2024-11-05 11:33:39.936528] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.765 [2024-11-05 11:33:39.936551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936559] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.765 [2024-11-05 11:33:39.936575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936582] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.765 [2024-11-05 11:33:39.936589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.765 [2024-11-05 11:33:39.936596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.765 [2024-11-05 11:33:39.936602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.765 [2024-11-05 11:33:39.936619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.765 [2024-11-05 11:33:39.936627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.765 [2024-11-05 11:33:39.936634] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.765 [2024-11-05 11:33:39.936648] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936655] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.765 [2024-11-05 11:33:39.936669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.765 [2024-11-05 11:33:39.936690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.765 [2024-11-05 11:33:39.936711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.765 [2024-11-05 11:33:39.936725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.765 [2024-11-05 11:33:39.936733] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.765 [2024-11-05 11:33:39.936740] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.766 [2024-11-05 11:33:39.936747] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.766 [2024-11-05 11:33:39.936754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.766 [2024-11-05 11:33:39.936760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.766 [2024-11-05 11:33:39.936767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.766 [2024-11-05 11:33:39.936774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.766 [2024-11-05 11:33:39.936781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.766 [2024-11-05 11:33:39.936789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.766 [2024-11-05 11:33:39.936796] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.766 [2024-11-05 11:33:39.936818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.766 [2024-11-05 11:33:39.936825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.766 [2024-11-05 11:33:39.936831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.766 [2024-11-05 11:33:39.936839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.766 [2024-11-05 11:33:39.936845] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.766 [2024-11-05 11:33:39.936853] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.766 [2024-11-05 11:33:39.936860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.766 [2024-11-05 11:33:39.936873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.766 [2024-11-05 11:33:39.936885] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.766 [2024-11-05 11:33:39.936891] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.766 [2024-11-05 11:33:39.936898] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.766 
[2024-11-05 11:33:39.936905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.766 [2024-11-05 11:33:39.936912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.766 [2024-11-05 11:33:39.936920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.766 [2024-11-05 11:33:39.936929] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.766 [2024-11-05 11:33:39.936939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.936948] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.766 [2024-11-05 11:33:39.936955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.766 [2024-11-05 11:33:39.936962] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.766 [2024-11-05 11:33:39.936970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.766 [2024-11-05 11:33:39.936977] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.766 [2024-11-05 11:33:39.936985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.766 [2024-11-05 11:33:39.936992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.766 [2024-11-05 11:33:39.936999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.766 [2024-11-05 11:33:39.937006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.766 [2024-11-05 11:33:39.937013] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937035] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937043] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.766 [2024-11-05 11:33:39.937050] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.766 [2024-11-05 11:33:39.937059] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.766 [2024-11-05 11:33:39.937076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.766 [2024-11-05 11:33:39.937085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.766 [2024-11-05 11:33:39.937092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.766 [2024-11-05 11:33:39.937099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:39.937107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.766 [2024-11-05 11:33:39.937115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:17:40.766 [2024-11-05 11:33:39.937127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:39.969369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:39.969421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.766 [2024-11-05 11:33:39.969433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.187 ms 00:17:40.766 [2024-11-05 11:33:39.969441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:39.969577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:39.969588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.766 [2024-11-05 11:33:39.969602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:40.766 [2024-11-05 11:33:39.969611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:40.013640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:40.013695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.766 [2024-11-05 11:33:40.013710] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.004 ms 00:17:40.766 [2024-11-05 11:33:40.013720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:40.013869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:40.013883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.766 [2024-11-05 11:33:40.013894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:40.766 [2024-11-05 11:33:40.013903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:40.014461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:40.014510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.766 [2024-11-05 11:33:40.014522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:17:40.766 [2024-11-05 11:33:40.014530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:40.014698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:40.014718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.766 [2024-11-05 11:33:40.014727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.135 ms 00:17:40.766 [2024-11-05 11:33:40.014735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.766 [2024-11-05 11:33:40.031571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.766 [2024-11-05 11:33:40.031620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.766 [2024-11-05 11:33:40.031632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.813 ms 00:17:40.766 [2024-11-05 11:33:40.031641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.046307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:41.028 [2024-11-05 11:33:40.046542] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:41.028 [2024-11-05 11:33:40.046564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.046574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:41.028 [2024-11-05 11:33:40.046584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.794 ms 00:17:41.028 [2024-11-05 11:33:40.046591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.072873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.073077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:41.028 [2024-11-05 11:33:40.073112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.187 ms 00:17:41.028 [2024-11-05 11:33:40.073122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.086023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.086072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:41.028 [2024-11-05 11:33:40.086085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.804 ms 00:17:41.028 [2024-11-05 11:33:40.086092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.098674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.098720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:41.028 [2024-11-05 11:33:40.098734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.488 ms 00:17:41.028 [2024-11-05 11:33:40.098742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.099464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.099502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:41.028 [2024-11-05 11:33:40.099513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.579 ms 00:17:41.028 [2024-11-05 11:33:40.099522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.165881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.165962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:41.028 [2024-11-05 11:33:40.165980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.328 ms 00:17:41.028 [2024-11-05 11:33:40.165990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.177254] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:41.028 [2024-11-05 11:33:40.196858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.196910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:41.028 [2024-11-05 11:33:40.196925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.758 ms 00:17:41.028 [2024-11-05 11:33:40.196934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.197041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.197053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:41.028 [2024-11-05 11:33:40.197067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:41.028 [2024-11-05 11:33:40.197076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.197135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.197146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:41.028 [2024-11-05 11:33:40.197155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:41.028 [2024-11-05 11:33:40.197163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.197189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.197202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:41.028 [2024-11-05 11:33:40.197211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:41.028 [2024-11-05 11:33:40.197222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.197261] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:41.028 [2024-11-05 11:33:40.197272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.028 [2024-11-05 11:33:40.197280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:41.028 [2024-11-05 11:33:40.197289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:41.028 [2024-11-05 11:33:40.197298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.028 [2024-11-05 11:33:40.223690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.029 [2024-11-05 11:33:40.223743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:41.029 [2024-11-05 11:33:40.223766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.369 ms 00:17:41.029 [2024-11-05 11:33:40.223774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:41.029 [2024-11-05 11:33:40.223917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:41.029 [2024-11-05 11:33:40.223932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:41.029 [2024-11-05 11:33:40.223942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:17:41.029 [2024-11-05 11:33:40.223951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:17:41.029 [2024-11-05 11:33:40.225208] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.029 [2024-11-05 11:33:40.228997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.533 ms, result 0 00:17:41.029 [2024-11-05 11:33:40.230158] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:41.029 [2024-11-05 11:33:40.243716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:41.972  [2024-11-05T11:33:42.654Z] Copying: 20/256 [MB] (20 MBps) [2024-11-05T11:33:43.595Z] Copying: 38/256 [MB] (17 MBps) [2024-11-05T11:33:44.538Z] Copying: 85/256 [MB] (47 MBps) [2024-11-05T11:33:45.483Z] Copying: 132/256 [MB] (47 MBps) [2024-11-05T11:33:46.427Z] Copying: 148/256 [MB] (15 MBps) [2024-11-05T11:33:47.370Z] Copying: 160/256 [MB] (12 MBps) [2024-11-05T11:33:48.315Z] Copying: 171/256 [MB] (10 MBps) [2024-11-05T11:33:49.256Z] Copying: 181/256 [MB] (10 MBps) [2024-11-05T11:33:50.664Z] Copying: 196244/262144 [kB] (9916 kBps) [2024-11-05T11:33:51.615Z] Copying: 204/256 [MB] (12 MBps) [2024-11-05T11:33:51.615Z] Copying: 252/256 [MB] (48 MBps) [2024-11-05T11:33:51.615Z] Copying: 256/256 [MB] (average 23 MBps)[2024-11-05 11:33:51.314827] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:52.341 [2024-11-05 11:33:51.321949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.341 [2024-11-05 11:33:51.321980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.341 [2024-11-05 11:33:51.321991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:52.342 [2024-11-05 11:33:51.321998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.322015] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.342 [2024-11-05 11:33:51.324083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.324108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.342 [2024-11-05 11:33:51.324120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.059 ms 00:17:52.342 [2024-11-05 11:33:51.324127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.325608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.325710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.342 [2024-11-05 11:33:51.325723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.463 ms 00:17:52.342 [2024-11-05 11:33:51.325729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.331367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.331392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.342 [2024-11-05 11:33:51.331399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.622 ms 00:17:52.342 [2024-11-05 11:33:51.331409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.336759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:52.342 [2024-11-05 11:33:51.336867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.342 [2024-11-05 11:33:51.336880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.326 ms 00:17:52.342 [2024-11-05 11:33:51.336886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.354126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.354150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.342 [2024-11-05 11:33:51.354158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:17:52.342 [2024-11-05 11:33:51.354165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.365600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.365626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.342 [2024-11-05 11:33:51.365634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.408 ms 00:17:52.342 [2024-11-05 11:33:51.365645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.365738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.365746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.342 [2024-11-05 11:33:51.365752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:17:52.342 [2024-11-05 11:33:51.365758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.383313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.383338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.342 [2024-11-05 11:33:51.383345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.543 ms 00:17:52.342 [2024-11-05 11:33:51.383351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.400557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.400580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.342 [2024-11-05 11:33:51.400587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.172 ms 00:17:52.342 [2024-11-05 11:33:51.400592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.417463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.417486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.342 [2024-11-05 11:33:51.417494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.845 ms 00:17:52.342 [2024-11-05 11:33:51.417499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 11:33:51.434355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.342 [2024-11-05 11:33:51.434378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.342 [2024-11-05 11:33:51.434385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.812 ms 00:17:52.342 [2024-11-05 11:33:51.434390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.342 [2024-11-05 
11:33:51.434415] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.342 [2024-11-05 11:33:51.434426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 
11:33:51.434567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.342 [2024-11-05 11:33:51.434682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 
00:17:52.343 [2024-11-05 11:33:51.434710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 
wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.434997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.343 [2024-11-05 11:33:51.435042] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.343 [2024-11-05 11:33:51.435050] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:17:52.343 [2024-11-05 11:33:51.435055] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.343 [2024-11-05 11:33:51.435061] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.343 [2024-11-05 11:33:51.435067] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.343 [2024-11-05 11:33:51.435072] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.343 [2024-11-05 11:33:51.435077] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.343 [2024-11-05 11:33:51.435082] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.343 [2024-11-05 11:33:51.435088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.343 [2024-11-05 11:33:51.435093] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.343 [2024-11-05 11:33:51.435098] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.343 [2024-11-05 11:33:51.435103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.343 [2024-11-05 11:33:51.435108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.343 [2024-11-05 11:33:51.435114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.688 ms 00:17:52.343 [2024-11-05 11:33:51.435120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.444627] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.343 [2024-11-05 11:33:51.444650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.343 [2024-11-05 11:33:51.444657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.493 ms 00:17:52.343 [2024-11-05 11:33:51.444663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.444952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.343 [2024-11-05 11:33:51.444961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.343 [2024-11-05 11:33:51.444970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:17:52.343 [2024-11-05 11:33:51.444976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.472024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.343 [2024-11-05 11:33:51.472050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.343 [2024-11-05 11:33:51.472057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.343 [2024-11-05 11:33:51.472063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.472118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.343 [2024-11-05 11:33:51.472124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.343 [2024-11-05 11:33:51.472132] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.343 [2024-11-05 11:33:51.472137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.472167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.343 [2024-11-05 11:33:51.472174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.343 [2024-11-05 11:33:51.472180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.343 [2024-11-05 11:33:51.472185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.472198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.343 [2024-11-05 11:33:51.472204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.343 [2024-11-05 11:33:51.472210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.343 [2024-11-05 11:33:51.472217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.531463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.343 [2024-11-05 11:33:51.531495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.343 [2024-11-05 11:33:51.531505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.343 [2024-11-05 11:33:51.531511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.343 [2024-11-05 11:33:51.580683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.580714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.344 [2024-11-05 11:33:51.580723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.580732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.580771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.580779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.344 [2024-11-05 11:33:51.580785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.580791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.580824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.580832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.344 [2024-11-05 11:33:51.580838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.580844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.580915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.580923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.344 [2024-11-05 11:33:51.580929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.580935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.580958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.580965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize superblock 00:17:52.344 [2024-11-05 11:33:51.580971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.580977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.581007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.581014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.344 [2024-11-05 11:33:51.581020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.581026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.581060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.344 [2024-11-05 11:33:51.581067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.344 [2024-11-05 11:33:51.581074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.344 [2024-11-05 11:33:51.581080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.344 [2024-11-05 11:33:51.581182] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.231 ms, result 0 00:17:53.287 00:17:53.287 00:17:53.287 11:33:52 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=73788 00:17:53.287 11:33:52 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 73788 00:17:53.287 11:33:52 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 73788 ']' 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:53.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:53.287 11:33:52 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:53.287 [2024-11-05 11:33:52.423334] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:17:53.287 [2024-11-05 11:33:52.423421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73788 ] 00:17:53.548 [2024-11-05 11:33:52.569361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.548 [2024-11-05 11:33:52.644188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:54.120 11:33:53 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:54.120 11:33:53 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:17:54.120 11:33:53 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:54.381 [2024-11-05 11:33:53.470437] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.381 [2024-11-05 11:33:53.470609] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:54.381 [2024-11-05 11:33:53.642158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.381 [2024-11-05 11:33:53.642196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:54.381 [2024-11-05 11:33:53.642208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:54.381 [2024-11-05 11:33:53.642215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.381 [2024-11-05 11:33:53.644307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.381 [2024-11-05 11:33:53.644337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:54.381 [2024-11-05 11:33:53.644346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.077 ms 00:17:54.381 [2024-11-05 11:33:53.644352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.381 [2024-11-05 11:33:53.644409] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:54.381 [2024-11-05 11:33:53.644929] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:54.381 [2024-11-05 11:33:53.645045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.381 [2024-11-05 11:33:53.645055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:54.381 [2024-11-05 11:33:53.645063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:54.381 [2024-11-05 11:33:53.645068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.381 [2024-11-05 11:33:53.646025] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:54.381 [2024-11-05 11:33:53.655573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.381 [2024-11-05 11:33:53.655606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:54.381 [2024-11-05 11:33:53.655614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.553 ms 00:17:54.381 [2024-11-05 11:33:53.655622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.381 [2024-11-05 11:33:53.655680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.381 [2024-11-05 11:33:53.655690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:54.381 [2024-11-05 11:33:53.655696] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:54.381 [2024-11-05 11:33:53.655703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.642 [2024-11-05 11:33:53.660025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.642 [2024-11-05 11:33:53.660149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:54.642 [2024-11-05 11:33:53.660161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.287 ms 00:17:54.642 [2024-11-05 11:33:53.660168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.642 [2024-11-05 11:33:53.660247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.642 [2024-11-05 11:33:53.660256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:54.643 [2024-11-05 11:33:53.660263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:17:54.643 [2024-11-05 11:33:53.660269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.660287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.643 [2024-11-05 11:33:53.660297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:54.643 [2024-11-05 11:33:53.660303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.643 [2024-11-05 11:33:53.660310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.660326] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:54.643 [2024-11-05 11:33:53.662948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.643 [2024-11-05 11:33:53.662970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:54.643 [2024-11-05 11:33:53.662979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.624 ms 00:17:54.643 [2024-11-05 11:33:53.662985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.663012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.643 [2024-11-05 11:33:53.663019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:54.643 [2024-11-05 11:33:53.663026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.643 [2024-11-05 11:33:53.663031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.663048] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:54.643 [2024-11-05 11:33:53.663062] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:54.643 [2024-11-05 11:33:53.663093] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:54.643 [2024-11-05 11:33:53.663105] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:54.643 [2024-11-05 11:33:53.663185] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:54.643 [2024-11-05 11:33:53.663194] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:54.643 [2024-11-05 11:33:53.663203] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:54.643 [2024-11-05 11:33:53.663211] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663220] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663226] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:54.643 [2024-11-05 11:33:53.663233] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:54.643 [2024-11-05 11:33:53.663239] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:54.643 [2024-11-05 11:33:53.663247] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:54.643 [2024-11-05 11:33:53.663253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.643 [2024-11-05 11:33:53.663260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:54.643 [2024-11-05 11:33:53.663265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.208 ms 00:17:54.643 [2024-11-05 11:33:53.663272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.663337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.643 [2024-11-05 11:33:53.663344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:54.643 [2024-11-05 11:33:53.663351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:54.643 [2024-11-05 11:33:53.663357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.643 [2024-11-05 11:33:53.663433] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:54.643 [2024-11-05 11:33:53.663441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:54.643 [2024-11-05 11:33:53.663447] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:54.643 [2024-11-05 11:33:53.663466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:54.643 [2024-11-05 11:33:53.663485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.643 [2024-11-05 11:33:53.663496] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:54.643 [2024-11-05 11:33:53.663503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:54.643 [2024-11-05 11:33:53.663508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:54.643 [2024-11-05 11:33:53.663514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:54.643 [2024-11-05 11:33:53.663519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:54.643 [2024-11-05 11:33:53.663527] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 
[2024-11-05 11:33:53.663532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:54.643 [2024-11-05 11:33:53.663538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:54.643 [2024-11-05 11:33:53.663560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:54.643 [2024-11-05 11:33:53.663579] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663591] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:54.643 [2024-11-05 11:33:53.663596] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:54.643 [2024-11-05 11:33:53.663613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:54.643 [2024-11-05 11:33:53.663630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663636] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.643 [2024-11-05 11:33:53.663641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:54.643 [2024-11-05 11:33:53.663647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:54.643 [2024-11-05 11:33:53.663652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:54.643 [2024-11-05 11:33:53.663658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:54.643 [2024-11-05 11:33:53.663663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:54.643 [2024-11-05 11:33:53.663671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:54.643 [2024-11-05 11:33:53.663681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:54.643 [2024-11-05 11:33:53.663687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663694] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:54.643 [2024-11-05 11:33:53.663699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:54.643 [2024-11-05 11:33:53.663706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663713] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:54.643 [2024-11-05 11:33:53.663720] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:54.643 [2024-11-05 11:33:53.663725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:54.643 [2024-11-05 11:33:53.663732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:54.643 [2024-11-05 11:33:53.663737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:54.643 [2024-11-05 11:33:53.663743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:54.643 [2024-11-05 11:33:53.663748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:54.643 [2024-11-05 11:33:53.663756] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:54.643 [2024-11-05 11:33:53.663762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.643 [2024-11-05 11:33:53.663771] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:54.643 [2024-11-05 11:33:53.663777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:54.643 [2024-11-05 11:33:53.663784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:54.643 [2024-11-05 11:33:53.663789] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:54.643 [2024-11-05 11:33:53.663795] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:54.643 [2024-11-05 11:33:53.663810] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:54.643 [2024-11-05 11:33:53.663817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:54.643 [2024-11-05 11:33:53.663822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:54.643 [2024-11-05 11:33:53.663829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:54.643 [2024-11-05 11:33:53.663835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663846] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:54.644 [2024-11-05 11:33:53.663867] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:54.644 [2024-11-05 
11:33:53.663873] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663881] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:54.644 [2024-11-05 11:33:53.663886] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:54.644 [2024-11-05 11:33:53.663893] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:54.644 [2024-11-05 11:33:53.663899] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:54.644 [2024-11-05 11:33:53.663906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.663912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:54.644 [2024-11-05 11:33:53.663919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:54.644 [2024-11-05 11:33:53.663925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.684391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.684419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:54.644 [2024-11-05 11:33:53.684428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.423 ms 00:17:54.644 [2024-11-05 11:33:53.684434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.684523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.684532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:54.644 [2024-11-05 11:33:53.684540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:54.644 [2024-11-05 11:33:53.684545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.708184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.708208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:54.644 [2024-11-05 11:33:53.708219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.621 ms 00:17:54.644 [2024-11-05 11:33:53.708226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.708269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.708276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:54.644 [2024-11-05 11:33:53.708283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:54.644 [2024-11-05 11:33:53.708289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.708553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.708565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:54.644 [2024-11-05 11:33:53.708572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:17:54.644 [2024-11-05 11:33:53.708578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.708675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.708682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:54.644 [2024-11-05 11:33:53.708689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:17:54.644 [2024-11-05 11:33:53.708694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.720123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.720146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:54.644 [2024-11-05 11:33:53.720155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.411 ms 00:17:54.644 [2024-11-05 11:33:53.720160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.729710] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:54.644 [2024-11-05 11:33:53.729736] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.644 [2024-11-05 11:33:53.729747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.729753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.644 [2024-11-05 11:33:53.729761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.514 ms 00:17:54.644 [2024-11-05 11:33:53.729767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.748304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.748329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.644 [2024-11-05 11:33:53.748339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.480 ms 00:17:54.644 [2024-11-05 11:33:53.748346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.757340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.757365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.644 [2024-11-05 11:33:53.757376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.930 ms 00:17:54.644 [2024-11-05 11:33:53.757381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.765865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.765898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.644 [2024-11-05 11:33:53.765907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.442 ms 00:17:54.644 [2024-11-05 11:33:53.765913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.766367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.766383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.644 [2024-11-05 11:33:53.766392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.394 ms 00:17:54.644 [2024-11-05 11:33:53.766397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 
11:33:53.817197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.817238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.644 [2024-11-05 11:33:53.817253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.780 ms 00:17:54.644 [2024-11-05 11:33:53.817259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.824877] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.644 [2024-11-05 11:33:53.836082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.836202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.644 [2024-11-05 11:33:53.836216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.762 ms 00:17:54.644 [2024-11-05 11:33:53.836224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.836283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.836292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.644 [2024-11-05 11:33:53.836299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.644 [2024-11-05 11:33:53.836306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.836346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.836354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.644 [2024-11-05 11:33:53.836361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:54.644 [2024-11-05 11:33:53.836368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.836388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.836396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.644 [2024-11-05 11:33:53.836402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.644 [2024-11-05 11:33:53.836408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.836434] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.644 [2024-11-05 11:33:53.836444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.836450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.644 [2024-11-05 11:33:53.836457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:54.644 [2024-11-05 11:33:53.836464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.854130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.854157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.644 [2024-11-05 11:33:53.854167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.646 ms 00:17:54.644 [2024-11-05 11:33:53.854174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.854255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.644 [2024-11-05 11:33:53.854263] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.644 [2024-11-05 11:33:53.854270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:17:54.644 [2024-11-05 11:33:53.854276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.644 [2024-11-05 11:33:53.854918] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.644 [2024-11-05 11:33:53.857191] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 212.511 ms, result 0 00:17:54.644 [2024-11-05 11:33:53.858037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.644 Some configs were skipped because the RPC state that can call them passed over. 00:17:54.644 11:33:53 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:54.905 [2024-11-05 11:33:54.086356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.905 [2024-11-05 11:33:54.086466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:54.905 [2024-11-05 11:33:54.086518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.428 ms 00:17:54.905 [2024-11-05 11:33:54.086554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.905 [2024-11-05 11:33:54.086596] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.669 ms, result 0 00:17:54.905 true 00:17:54.905 11:33:54 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:55.166 [2024-11-05 11:33:54.282147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.166 [2024-11-05 11:33:54.282244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:55.166 [2024-11-05 11:33:54.282285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.033 ms 00:17:55.166 [2024-11-05 11:33:54.282303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.166 [2024-11-05 11:33:54.282342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.228 ms, result 0 00:17:55.166 true 00:17:55.166 11:33:54 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 73788 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 73788 ']' 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 73788 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73788 00:17:55.166 killing process with pid 73788 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73788' 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 73788 00:17:55.166 11:33:54 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 73788 00:17:55.740 [2024-11-05 11:33:54.849097] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.849146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:55.740 [2024-11-05 11:33:54.849156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:55.740 [2024-11-05 11:33:54.849164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.849192] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:55.740 [2024-11-05 11:33:54.851260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.851285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:55.740 [2024-11-05 11:33:54.851298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.054 ms 00:17:55.740 [2024-11-05 11:33:54.851304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.851519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.851526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:55.740 [2024-11-05 11:33:54.851534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:17:55.740 [2024-11-05 11:33:54.851540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.854765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.854791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:55.740 [2024-11-05 11:33:54.854799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.210 ms 00:17:55.740 [2024-11-05 11:33:54.854814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.860061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.860171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:55.740 [2024-11-05 11:33:54.860189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.219 ms 00:17:55.740 [2024-11-05 11:33:54.860195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.867381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.867479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:55.740 [2024-11-05 11:33:54.867494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.131 ms 00:17:55.740 [2024-11-05 11:33:54.867505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.873989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.874076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:55.740 [2024-11-05 11:33:54.874127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.455 ms 00:17:55.740 [2024-11-05 11:33:54.874147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.874259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.874326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:55.740 [2024-11-05 11:33:54.874373] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:55.740 [2024-11-05 11:33:54.874388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.882200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.882290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:55.740 [2024-11-05 11:33:54.882379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.786 ms 00:17:55.740 [2024-11-05 11:33:54.882397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.889993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.890077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:55.740 [2024-11-05 11:33:54.890124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.561 ms 00:17:55.740 [2024-11-05 11:33:54.890140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.897227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.897310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:55.740 [2024-11-05 11:33:54.897351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.019 ms 00:17:55.740 [2024-11-05 11:33:54.897368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.904276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.740 [2024-11-05 11:33:54.904355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:55.740 [2024-11-05 11:33:54.904394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.853 ms 00:17:55.740 [2024-11-05 11:33:54.904411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.740 [2024-11-05 11:33:54.904452] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:55.740 [2024-11-05 11:33:54.904510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904808] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:55.740 [2024-11-05 11:33:54.904887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.904910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.904932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.904955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 
[2024-11-05 11:33:54.905691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.905995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:55.741 [2024-11-05 11:33:54.906493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.906955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:55.741 [2024-11-05 11:33:54.907695] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:55.742 [2024-11-05 11:33:54.907712] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:17:55.742 [2024-11-05 11:33:54.907739] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:55.742 [2024-11-05 11:33:54.907783] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:55.742 [2024-11-05 11:33:54.907810] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:55.742 [2024-11-05 11:33:54.907835] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:55.742 [2024-11-05 11:33:54.907849] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:55.742 [2024-11-05 11:33:54.907865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:55.742 [2024-11-05 11:33:54.907880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:55.742 [2024-11-05 11:33:54.907894] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:55.742 [2024-11-05 11:33:54.907932] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:55.742 [2024-11-05 11:33:54.907972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:55.742 [2024-11-05 11:33:54.907989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:55.742 [2024-11-05 11:33:54.908024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.520 ms 00:17:55.742 [2024-11-05 11:33:54.908041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.917482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.742 [2024-11-05 11:33:54.917561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:55.742 [2024-11-05 11:33:54.917602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.413 ms 00:17:55.742 [2024-11-05 11:33:54.917619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.917926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.742 [2024-11-05 11:33:54.917985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:55.742 [2024-11-05 11:33:54.918034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.261 ms 00:17:55.742 [2024-11-05 11:33:54.918052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.952405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.742 [2024-11-05 11:33:54.952489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:55.742 [2024-11-05 11:33:54.952503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.742 [2024-11-05 11:33:54.952510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.952581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.742 [2024-11-05 11:33:54.952589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:55.742 [2024-11-05 11:33:54.952596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.742 [2024-11-05 11:33:54.952602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.952637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.742 [2024-11-05 11:33:54.952644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:55.742 [2024-11-05 11:33:54.952653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.742 [2024-11-05 11:33:54.952659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:54.952674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.742 [2024-11-05 11:33:54.952680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:55.742 [2024-11-05 11:33:54.952687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.742 [2024-11-05 11:33:54.952693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.742 [2024-11-05 11:33:55.010923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:55.742 [2024-11-05 11:33:55.011042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:55.742 [2024-11-05 11:33:55.011057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:55.742 [2024-11-05 11:33:55.011064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 
11:33:55.059168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:56.004 [2024-11-05 11:33:55.059204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:56.004 [2024-11-05 11:33:55.059285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:56.004 [2024-11-05 11:33:55.059329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:56.004 [2024-11-05 11:33:55.059419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:56.004 [2024-11-05 11:33:55.059464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:56.004 [2024-11-05 11:33:55.059516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:56.004 [2024-11-05 11:33:55.059564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:56.004 [2024-11-05 11:33:55.059571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:56.004 [2024-11-05 11:33:55.059577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:56.004 [2024-11-05 11:33:55.059682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 210.568 ms, result 0 00:17:56.578 11:33:55 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:56.578 11:33:55 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:56.578 [2024-11-05 11:33:55.628848] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:17:56.578 [2024-11-05 11:33:55.628964] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73835 ] 00:17:56.578 [2024-11-05 11:33:55.784156] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.839 [2024-11-05 11:33:55.858496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:56.840 [2024-11-05 11:33:56.062207] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:56.840 [2024-11-05 11:33:56.062256] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:57.099 [2024-11-05 11:33:56.213792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.099 [2024-11-05 11:33:56.213839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:57.099 [2024-11-05 11:33:56.213850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.099 [2024-11-05 11:33:56.213856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.099 [2024-11-05 11:33:56.215997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.099 [2024-11-05 11:33:56.216027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:57.099 [2024-11-05 11:33:56.216034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.129 ms 00:17:57.099 [2024-11-05 11:33:56.216040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.099 [2024-11-05 11:33:56.216094] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:57.099 [2024-11-05 11:33:56.216602] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:57.100 [2024-11-05 11:33:56.216623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.216629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:57.100 [2024-11-05 11:33:56.216636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:17:57.100 [2024-11-05 11:33:56.216641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.217672] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:57.100 [2024-11-05 11:33:56.227240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.227367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:57.100 [2024-11-05 11:33:56.227384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.569 ms 00:17:57.100 [2024-11-05 11:33:56.227390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.227455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.227463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:57.100 [2024-11-05 11:33:56.227470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.014 ms 00:17:57.100 [2024-11-05 11:33:56.227476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.231727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.231755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:57.100 [2024-11-05 11:33:56.231763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.223 ms 00:17:57.100 [2024-11-05 11:33:56.231769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.231846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.231855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:57.100 [2024-11-05 11:33:56.231862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:57.100 [2024-11-05 11:33:56.231867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.231886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.231892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:57.100 [2024-11-05 11:33:56.231900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:57.100 [2024-11-05 11:33:56.231906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.231923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:57.100 [2024-11-05 11:33:56.234492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.234601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:57.100 [2024-11-05 11:33:56.234612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.573 ms 00:17:57.100 [2024-11-05 11:33:56.234618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.234646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.234653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:57.100 [2024-11-05 11:33:56.234659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:57.100 [2024-11-05 11:33:56.234664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.234677] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:57.100 [2024-11-05 11:33:56.234695] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:57.100 [2024-11-05 11:33:56.234726] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:57.100 [2024-11-05 11:33:56.234737] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:57.100 [2024-11-05 11:33:56.234829] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:57.100 [2024-11-05 11:33:56.234838] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:57.100 [2024-11-05 11:33:56.234846] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:57.100 [2024-11-05 11:33:56.234854] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:57.100 [2024-11-05 11:33:56.234861] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:57.100 [2024-11-05 11:33:56.234869] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:57.100 [2024-11-05 11:33:56.234875] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:57.100 [2024-11-05 11:33:56.234880] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:57.100 [2024-11-05 11:33:56.234886] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:57.100 [2024-11-05 11:33:56.234892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.234898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:57.100 [2024-11-05 11:33:56.234903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:17:57.100 [2024-11-05 11:33:56.234909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.234974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.100 [2024-11-05 11:33:56.234981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:57.100 [2024-11-05 11:33:56.234987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:57.100 [2024-11-05 11:33:56.234994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.100 [2024-11-05 11:33:56.235066] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:57.100 [2024-11-05 11:33:56.235073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:57.100 [2024-11-05 11:33:56.235079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235085] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:57.100 [2024-11-05 11:33:56.235096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:57.100 [2024-11-05 11:33:56.235113] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.100 [2024-11-05 11:33:56.235124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:57.100 [2024-11-05 11:33:56.235129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:57.100 [2024-11-05 11:33:56.235134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:57.100 [2024-11-05 11:33:56.235143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:57.100 [2024-11-05 11:33:56.235148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:57.100 [2024-11-05 11:33:56.235153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235159] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:57.100 [2024-11-05 11:33:56.235165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:57.100 [2024-11-05 11:33:56.235180] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:57.100 [2024-11-05 11:33:56.235195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:57.100 [2024-11-05 11:33:56.235209] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:57.100 [2024-11-05 11:33:56.235224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:57.100 [2024-11-05 11:33:56.235234] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:57.100 [2024-11-05 11:33:56.235239] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:57.100 [2024-11-05 11:33:56.235244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.100 [2024-11-05 11:33:56.235249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:57.100 [2024-11-05 11:33:56.235254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:57.100 [2024-11-05 11:33:56.235259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:57.100 [2024-11-05 11:33:56.235264] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:57.100 [2024-11-05 11:33:56.235268] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:57.101 [2024-11-05 11:33:56.235274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.101 [2024-11-05 11:33:56.235279] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:57.101 [2024-11-05 11:33:56.235283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:57.101 [2024-11-05 11:33:56.235288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.101 [2024-11-05 11:33:56.235294] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:57.101 [2024-11-05 11:33:56.235300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:57.101 [2024-11-05 11:33:56.235305] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:57.101 [2024-11-05 11:33:56.235311] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:57.101 [2024-11-05 11:33:56.235318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:57.101 
[2024-11-05 11:33:56.235324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:57.101 [2024-11-05 11:33:56.235329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:57.101 [2024-11-05 11:33:56.235335] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:57.101 [2024-11-05 11:33:56.235339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:57.101 [2024-11-05 11:33:56.235344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:57.101 [2024-11-05 11:33:56.235350] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:57.101 [2024-11-05 11:33:56.235357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:57.101 [2024-11-05 11:33:56.235369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:57.101 [2024-11-05 11:33:56.235374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:57.101 [2024-11-05 11:33:56.235380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:57.101 [2024-11-05 11:33:56.235385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:57.101 [2024-11-05 11:33:56.235391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:57.101 [2024-11-05 11:33:56.235396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:57.101 [2024-11-05 11:33:56.235401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:57.101 [2024-11-05 11:33:56.235406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:57.101 [2024-11-05 11:33:56.235412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235417] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:57.101 [2024-11-05 11:33:56.235439] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:57.101 [2024-11-05 11:33:56.235445] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:57.101 [2024-11-05 11:33:56.235457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:57.101 [2024-11-05 11:33:56.235462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:57.101 [2024-11-05 11:33:56.235467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:57.101 [2024-11-05 11:33:56.235473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.235478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:57.101 [2024-11-05 11:33:56.235484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.459 ms 00:17:57.101 [2024-11-05 11:33:56.235490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.255964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.255990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:57.101 [2024-11-05 11:33:56.255998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.437 ms 00:17:57.101 [2024-11-05 11:33:56.256003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.256096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.256103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:57.101 [2024-11-05 11:33:56.256113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:57.101 [2024-11-05 11:33:56.256118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.303897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.303927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:57.101 [2024-11-05 11:33:56.303936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.762 ms 00:17:57.101 [2024-11-05 11:33:56.303943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.304000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.304010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:57.101 [2024-11-05 11:33:56.304017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:57.101 [2024-11-05 11:33:56.304022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.304295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.304306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:57.101 [2024-11-05 11:33:56.304313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:17:57.101 [2024-11-05 11:33:56.304319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 
11:33:56.304418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.304428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:57.101 [2024-11-05 11:33:56.304435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:17:57.101 [2024-11-05 11:33:56.304441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.315089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.315114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:57.101 [2024-11-05 11:33:56.315122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.634 ms 00:17:57.101 [2024-11-05 11:33:56.315127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.324704] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:57.101 [2024-11-05 11:33:56.324819] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:57.101 [2024-11-05 11:33:56.324832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.324839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:57.101 [2024-11-05 11:33:56.324846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.614 ms 00:17:57.101 [2024-11-05 11:33:56.324851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.343271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.343302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:57.101 [2024-11-05 11:33:56.343310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.377 ms 00:17:57.101 [2024-11-05 11:33:56.343317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.352284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.352380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:57.101 [2024-11-05 11:33:56.352393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.916 ms 00:17:57.101 [2024-11-05 11:33:56.352399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.361109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.361134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:57.101 [2024-11-05 11:33:56.361141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.671 ms 00:17:57.101 [2024-11-05 11:33:56.361147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.101 [2024-11-05 11:33:56.361611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.101 [2024-11-05 11:33:56.361632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:57.101 [2024-11-05 11:33:56.361639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:17:57.101 [2024-11-05 11:33:56.361645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.362 [2024-11-05 11:33:56.405171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.405308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:57.363 [2024-11-05 11:33:56.405324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.509 ms 00:17:57.363 [2024-11-05 11:33:56.405331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.413134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:57.363 [2024-11-05 11:33:56.424226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.424331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:57.363 [2024-11-05 11:33:56.424344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.839 ms 00:17:57.363 [2024-11-05 11:33:56.424350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.424418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.424428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:57.363 [2024-11-05 11:33:56.424435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:57.363 [2024-11-05 11:33:56.424442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.424477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.424484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:57.363 [2024-11-05 11:33:56.424490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:57.363 [2024-11-05 11:33:56.424496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.424517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.424524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:57.363 [2024-11-05 11:33:56.424532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:57.363 [2024-11-05 11:33:56.424538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.424561] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:57.363 [2024-11-05 11:33:56.424569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.424574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:57.363 [2024-11-05 11:33:56.424580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:57.363 [2024-11-05 11:33:56.424586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.442454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.442485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:57.363 [2024-11-05 11:33:56.442493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.853 ms 00:17:57.363 [2024-11-05 11:33:56.442508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.442578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:57.363 [2024-11-05 11:33:56.442586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:57.363 [2024-11-05 11:33:56.442592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:57.363 [2024-11-05 11:33:56.442599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:57.363 [2024-11-05 11:33:56.443216] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:57.363 [2024-11-05 11:33:56.445536] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 229.206 ms, result 0 00:17:57.363 [2024-11-05 11:33:56.446105] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:57.363 [2024-11-05 11:33:56.460936] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:58.307  [2024-11-05T11:33:58.551Z] Copying: 17/256 [MB] (17 MBps) [2024-11-05T11:33:59.496Z] Copying: 35/256 [MB] (17 MBps) [2024-11-05T11:34:00.886Z] Copying: 49/256 [MB] (14 MBps) [2024-11-05T11:34:01.829Z] Copying: 70/256 [MB] (21 MBps) [2024-11-05T11:34:02.772Z] Copying: 95/256 [MB] (24 MBps) [2024-11-05T11:34:03.717Z] Copying: 119/256 [MB] (24 MBps) [2024-11-05T11:34:04.662Z] Copying: 139/256 [MB] (19 MBps) [2024-11-05T11:34:05.602Z] Copying: 158/256 [MB] (19 MBps) [2024-11-05T11:34:06.542Z] Copying: 177/256 [MB] (18 MBps) [2024-11-05T11:34:07.490Z] Copying: 199/256 [MB] (21 MBps) [2024-11-05T11:34:08.874Z] Copying: 216/256 [MB] (17 MBps) [2024-11-05T11:34:09.448Z] Copying: 238/256 [MB] (21 MBps) [2024-11-05T11:34:09.448Z] Copying: 256/256 [MB] (average 19 MBps)[2024-11-05 11:34:09.414657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:10.174 [2024-11-05 11:34:09.424816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.174 [2024-11-05 11:34:09.425026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:10.174 [2024-11-05 11:34:09.425051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:10.174 [2024-11-05 11:34:09.425062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.174 [2024-11-05 11:34:09.425093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:10.174 [2024-11-05 11:34:09.428128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.174 [2024-11-05 11:34:09.428313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:10.174 [2024-11-05 11:34:09.428334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.017 ms 00:18:10.175 [2024-11-05 11:34:09.428343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.175 [2024-11-05 11:34:09.428632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.175 [2024-11-05 11:34:09.428643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:10.175 [2024-11-05 11:34:09.428653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:18:10.175 [2024-11-05 11:34:09.428661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.175 [2024-11-05 11:34:09.432373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.175 [2024-11-05 11:34:09.432399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:10.175 [2024-11-05 11:34:09.432416] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.696 ms 00:18:10.175 [2024-11-05 11:34:09.432424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.175 [2024-11-05 11:34:09.439428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.175 [2024-11-05 11:34:09.439600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:10.175 [2024-11-05 11:34:09.439621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.986 ms 00:18:10.175 [2024-11-05 11:34:09.439629] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.465243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.465295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:10.438 [2024-11-05 11:34:09.465309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.541 ms 00:18:10.438 [2024-11-05 11:34:09.465317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.482005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.482198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:10.438 [2024-11-05 11:34:09.482227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.632 ms 00:18:10.438 [2024-11-05 11:34:09.482236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.482388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.482400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:10.438 [2024-11-05 11:34:09.482409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:10.438 [2024-11-05 11:34:09.482417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.508577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.508625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:10.438 [2024-11-05 11:34:09.508637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.134 ms 00:18:10.438 [2024-11-05 11:34:09.508645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.534031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.534079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:10.438 [2024-11-05 11:34:09.534091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.322 ms 00:18:10.438 [2024-11-05 11:34:09.534098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.559012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.559059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:10.438 [2024-11-05 11:34:09.559071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.844 ms 00:18:10.438 [2024-11-05 11:34:09.559078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.583512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.438 [2024-11-05 11:34:09.583559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Set FTL clean state 00:18:10.438 [2024-11-05 11:34:09.583571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.352 ms 00:18:10.438 [2024-11-05 11:34:09.583578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.438 [2024-11-05 11:34:09.583626] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:10.438 [2024-11-05 11:34:09.583649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 
11:34:09.583838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.583998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 
00:18:10.438 [2024-11-05 11:34:09.584058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:10.438 [2024-11-05 11:34:09.584089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 
wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:10.439 [2024-11-05 11:34:09.584489] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:10.439 [2024-11-05 11:34:09.584498] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:18:10.439 [2024-11-05 11:34:09.584507] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:10.439 [2024-11-05 11:34:09.584515] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:10.439 [2024-11-05 11:34:09.584523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:10.439 [2024-11-05 11:34:09.584531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:10.439 [2024-11-05 11:34:09.584539] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:10.439 [2024-11-05 11:34:09.584547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:10.439 [2024-11-05 11:34:09.584554] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:10.439 [2024-11-05 11:34:09.584561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:10.439 [2024-11-05 11:34:09.584567] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:10.439 [2024-11-05 11:34:09.584575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.439 [2024-11-05 11:34:09.584582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:10.439 [2024-11-05 11:34:09.584592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.949 ms 00:18:10.439 [2024-11-05 11:34:09.584603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 [2024-11-05 11:34:09.598390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.439 [2024-11-05 11:34:09.598433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:10.439 [2024-11-05 11:34:09.598446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.751 ms 00:18:10.439 [2024-11-05 11:34:09.598454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 [2024-11-05 11:34:09.598905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:10.439 [2024-11-05 11:34:09.598930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:10.439 [2024-11-05 11:34:09.598940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:18:10.439 [2024-11-05 11:34:09.598947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 [2024-11-05 11:34:09.637965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.439 [2024-11-05 11:34:09.638015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:10.439 [2024-11-05 11:34:09.638027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.439 [2024-11-05 11:34:09.638036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 
[2024-11-05 11:34:09.638137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.439 [2024-11-05 11:34:09.638151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:10.439 [2024-11-05 11:34:09.638160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.439 [2024-11-05 11:34:09.638167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 [2024-11-05 11:34:09.638224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.439 [2024-11-05 11:34:09.638233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:10.439 [2024-11-05 11:34:09.638241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.439 [2024-11-05 11:34:09.638248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.439 [2024-11-05 11:34:09.638265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.439 [2024-11-05 11:34:09.638274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:10.439 [2024-11-05 11:34:09.638286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.439 [2024-11-05 11:34:09.638293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.701 [2024-11-05 11:34:09.724549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.701 [2024-11-05 11:34:09.724606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:10.701 [2024-11-05 11:34:09.724619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.724627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.795783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.795864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:10.702 [2024-11-05 11:34:09.795884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.795892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.795973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.795982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:10.702 [2024-11-05 11:34:09.795991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.795999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.796042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:10.702 [2024-11-05 11:34:09.796052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.796061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.796177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:10.702 [2024-11-05 11:34:09.796186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.796194] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796230] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.796240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:10.702 [2024-11-05 11:34:09.796248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.796256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.796313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:10.702 [2024-11-05 11:34:09.796321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.796330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:10.702 [2024-11-05 11:34:09.796399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:10.702 [2024-11-05 11:34:09.796409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:10.702 [2024-11-05 11:34:09.796417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:10.702 [2024-11-05 11:34:09.796579] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 371.761 ms, result 0 00:18:11.273 00:18:11.273 00:18:11.534 11:34:10 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:18:11.534 11:34:10 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:12.108 11:34:11 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:12.108 [2024-11-05 11:34:11.204947] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:18:12.108 [2024-11-05 11:34:11.205095] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74006 ] 00:18:12.108 [2024-11-05 11:34:11.369140] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.370 [2024-11-05 11:34:11.490150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.631 [2024-11-05 11:34:11.779174] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.631 [2024-11-05 11:34:11.779542] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:12.894 [2024-11-05 11:34:11.941616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.941877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:12.894 [2024-11-05 11:34:11.941903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:12.894 [2024-11-05 11:34:11.941913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.945021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.945213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:12.894 [2024-11-05 11:34:11.945233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.077 ms 00:18:12.894 [2024-11-05 11:34:11.945243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.945365] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:12.894 [2024-11-05 11:34:11.946108] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:12.894 [2024-11-05 11:34:11.946135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.946145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:12.894 [2024-11-05 11:34:11.946155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.781 ms 00:18:12.894 [2024-11-05 11:34:11.946164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.948350] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:12.894 [2024-11-05 11:34:11.962732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.962787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:12.894 [2024-11-05 11:34:11.962829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.385 ms 00:18:12.894 [2024-11-05 11:34:11.962839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.962985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.963000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:12.894 [2024-11-05 11:34:11.963009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:12.894 [2024-11-05 11:34:11.963017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.971833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:12.894 [2024-11-05 11:34:11.971880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:12.894 [2024-11-05 11:34:11.971891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.768 ms 00:18:12.894 [2024-11-05 11:34:11.971899] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.972010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.972020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:12.894 [2024-11-05 11:34:11.972030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:18:12.894 [2024-11-05 11:34:11.972039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.972069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.972078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:12.894 [2024-11-05 11:34:11.972089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:12.894 [2024-11-05 11:34:11.972097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.972121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:12.894 [2024-11-05 11:34:11.976191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.976235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:12.894 [2024-11-05 11:34:11.976247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.078 ms 00:18:12.894 [2024-11-05 11:34:11.976255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.976331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.976342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:12.894 [2024-11-05 11:34:11.976353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:12.894 [2024-11-05 11:34:11.976362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.976383] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:12.894 [2024-11-05 11:34:11.976406] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:12.894 [2024-11-05 11:34:11.976446] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:12.894 [2024-11-05 11:34:11.976463] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:12.894 [2024-11-05 11:34:11.976570] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:12.894 [2024-11-05 11:34:11.976583] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:12.894 [2024-11-05 11:34:11.976594] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:12.894 [2024-11-05 11:34:11.976604] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:12.894 [2024-11-05 11:34:11.976613] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:12.894 [2024-11-05 11:34:11.976625] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:12.894 [2024-11-05 11:34:11.976634] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:12.894 [2024-11-05 11:34:11.976642] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:12.894 [2024-11-05 11:34:11.976650] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:12.894 [2024-11-05 11:34:11.976658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.976666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:12.894 [2024-11-05 11:34:11.976673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:18:12.894 [2024-11-05 11:34:11.976681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.976768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.894 [2024-11-05 11:34:11.976778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:12.894 [2024-11-05 11:34:11.976786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:12.894 [2024-11-05 11:34:11.976796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.894 [2024-11-05 11:34:11.976913] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:12.894 [2024-11-05 11:34:11.976924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:12.894 [2024-11-05 11:34:11.976934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.894 [2024-11-05 11:34:11.976943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.894 [2024-11-05 11:34:11.976951] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:12.895 [2024-11-05 11:34:11.976958] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.976965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:12.895 [2024-11-05 11:34:11.976972] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:12.895 [2024-11-05 11:34:11.976981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:12.895 [2024-11-05 11:34:11.976987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.895 [2024-11-05 11:34:11.976995] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:12.895 [2024-11-05 11:34:11.977002] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:12.895 [2024-11-05 11:34:11.977009] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:12.895 [2024-11-05 11:34:11.977024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:12.895 [2024-11-05 11:34:11.977032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:12.895 [2024-11-05 11:34:11.977041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:12.895 [2024-11-05 11:34:11.977057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977065] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:12.895 [2024-11-05 11:34:11.977080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:12.895 [2024-11-05 11:34:11.977102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:12.895 [2024-11-05 11:34:11.977123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:12.895 [2024-11-05 11:34:11.977144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:12.895 [2024-11-05 11:34:11.977165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.895 [2024-11-05 11:34:11.977178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:12.895 [2024-11-05 11:34:11.977185] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:12.895 [2024-11-05 11:34:11.977192] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:12.895 [2024-11-05 11:34:11.977199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:12.895 [2024-11-05 11:34:11.977206] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:12.895 [2024-11-05 11:34:11.977213] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:12.895 [2024-11-05 11:34:11.977226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:12.895 [2024-11-05 11:34:11.977232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977239] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:12.895 [2024-11-05 11:34:11.977248] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:12.895 [2024-11-05 11:34:11.977256] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977263] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:12.895 [2024-11-05 11:34:11.977278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:12.895 [2024-11-05 11:34:11.977286] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:12.895 [2024-11-05 11:34:11.977293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:12.895 
[2024-11-05 11:34:11.977300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:12.895 [2024-11-05 11:34:11.977307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:12.895 [2024-11-05 11:34:11.977313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:12.895 [2024-11-05 11:34:11.977323] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:12.895 [2024-11-05 11:34:11.977333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:12.895 [2024-11-05 11:34:11.977350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:12.895 [2024-11-05 11:34:11.977357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:12.895 [2024-11-05 11:34:11.977364] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:12.895 [2024-11-05 11:34:11.977372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:12.895 [2024-11-05 11:34:11.977379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:12.895 [2024-11-05 11:34:11.977386] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:12.895 [2024-11-05 11:34:11.977394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:12.895 [2024-11-05 11:34:11.977401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:12.895 [2024-11-05 11:34:11.977409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977416] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:12.895 [2024-11-05 11:34:11.977443] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:12.895 [2024-11-05 11:34:11.977452] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977460] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:12.895 [2024-11-05 11:34:11.977468] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:12.895 [2024-11-05 11:34:11.977475] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:12.895 [2024-11-05 11:34:11.977482] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:12.895 [2024-11-05 11:34:11.977490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:11.977499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:12.895 [2024-11-05 11:34:11.977507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:18:12.895 [2024-11-05 11:34:11.977516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.010029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.010238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:12.895 [2024-11-05 11:34:12.010258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.457 ms 00:18:12.895 [2024-11-05 11:34:12.010268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.010412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.010424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:12.895 [2024-11-05 11:34:12.010441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:18:12.895 [2024-11-05 11:34:12.010449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.061916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.061972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:12.895 [2024-11-05 11:34:12.061985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.442 ms 00:18:12.895 [2024-11-05 11:34:12.061995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.062121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.062134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:12.895 [2024-11-05 11:34:12.062145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:12.895 [2024-11-05 11:34:12.062153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.062692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.062725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:12.895 [2024-11-05 11:34:12.062736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:18:12.895 [2024-11-05 11:34:12.062744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.062924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.062935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:12.895 [2024-11-05 11:34:12.062944] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:18:12.895 [2024-11-05 11:34:12.062951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.895 [2024-11-05 11:34:12.079282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.895 [2024-11-05 11:34:12.079326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:12.895 [2024-11-05 11:34:12.079336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.307 ms 00:18:12.896 [2024-11-05 11:34:12.079345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-05 11:34:12.093862] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:18:12.896 [2024-11-05 11:34:12.093914] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:12.896 [2024-11-05 11:34:12.093929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-05 11:34:12.093938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:12.896 [2024-11-05 11:34:12.093948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.462 ms 00:18:12.896 [2024-11-05 11:34:12.093956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-05 11:34:12.119506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-05 11:34:12.119567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:12.896 [2024-11-05 11:34:12.119580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.448 ms 00:18:12.896 [2024-11-05 11:34:12.119589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-05 11:34:12.132182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-05 11:34:12.132229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:12.896 [2024-11-05 11:34:12.132241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.492 ms 00:18:12.896 [2024-11-05 11:34:12.132249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-05 11:34:12.144919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-05 11:34:12.145117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:12.896 [2024-11-05 11:34:12.145138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.576 ms 00:18:12.896 [2024-11-05 11:34:12.145146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:12.896 [2024-11-05 11:34:12.145841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:12.896 [2024-11-05 11:34:12.145870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:12.896 [2024-11-05 11:34:12.145881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:18:12.896 [2024-11-05 11:34:12.145889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.212372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.212601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:13.158 [2024-11-05 11:34:12.212627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.451 ms 00:18:13.158 [2024-11-05 11:34:12.212637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.224020] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:13.158 [2024-11-05 11:34:12.243537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.243745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:13.158 [2024-11-05 11:34:12.243767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.664 ms 00:18:13.158 [2024-11-05 11:34:12.243776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.243899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.243916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:13.158 [2024-11-05 11:34:12.243927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:13.158 [2024-11-05 11:34:12.243935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.243994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.244005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:13.158 [2024-11-05 11:34:12.244013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:13.158 [2024-11-05 11:34:12.244022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.244050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.244059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:13.158 [2024-11-05 11:34:12.244071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:13.158 [2024-11-05 11:34:12.244079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.244119] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:13.158 [2024-11-05 11:34:12.244131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.244139] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:13.158 [2024-11-05 11:34:12.244148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:13.158 [2024-11-05 11:34:12.244157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.270288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.270493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:13.158 [2024-11-05 11:34:12.270518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.109 ms 00:18:13.158 [2024-11-05 11:34:12.270541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.158 [2024-11-05 11:34:12.270951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.158 [2024-11-05 11:34:12.270979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:13.158 [2024-11-05 11:34:12.270991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:18:13.158 [2024-11-05 11:34:12.271000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:18:13.158 [2024-11-05 11:34:12.272666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.158 [2024-11-05 11:34:12.276142] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 330.707 ms, result 0 00:18:13.158 [2024-11-05 11:34:12.277494] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.158 [2024-11-05 11:34:12.291441] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:13.420  [2024-11-05T11:34:12.694Z] Copying: 4096/4096 [kB] (average 10 MBps)[2024-11-05 11:34:12.693677] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:13.683 [2024-11-05 11:34:12.702757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.702963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:13.684 [2024-11-05 11:34:12.703147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:13.684 [2024-11-05 11:34:12.703190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.703239] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:13.684 [2024-11-05 11:34:12.706325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.706499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:13.684 [2024-11-05 11:34:12.706790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.920 ms 00:18:13.684 [2024-11-05 11:34:12.706851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.709792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.709964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:13.684 [2024-11-05 11:34:12.710036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.891 ms 00:18:13.684 [2024-11-05 11:34:12.710059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.714484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.714638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:13.684 [2024-11-05 11:34:12.714712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.391 ms 00:18:13.684 [2024-11-05 11:34:12.714737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.721982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.722141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:13.684 [2024-11-05 11:34:12.722197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.902 ms 00:18:13.684 [2024-11-05 11:34:12.722220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.747405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.747577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:13.684 [2024-11-05 11:34:12.747638] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 25.118 ms 00:18:13.684 [2024-11-05 11:34:12.747660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.764593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.764767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:13.684 [2024-11-05 11:34:12.764875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 00:18:13.684 [2024-11-05 11:34:12.764905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.765060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.765087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:13.684 [2024-11-05 11:34:12.765108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:18:13.684 [2024-11-05 11:34:12.765178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.791216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.791385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:13.684 [2024-11-05 11:34:12.791442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.990 ms 00:18:13.684 [2024-11-05 11:34:12.791465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.816752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.816945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:13.684 [2024-11-05 11:34:12.817001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.164 ms 00:18:13.684 [2024-11-05 11:34:12.817022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.842381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.842565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:13.684 [2024-11-05 11:34:12.842626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.208 ms 00:18:13.684 [2024-11-05 11:34:12.842648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.867418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.684 [2024-11-05 11:34:12.867590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:13.684 [2024-11-05 11:34:12.867647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.651 ms 00:18:13.684 [2024-11-05 11:34:12.867656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.684 [2024-11-05 11:34:12.867743] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:13.684 [2024-11-05 11:34:12.867768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:18:13.684 [2024-11-05 11:34:12.867832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.867996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:13.684 [2024-11-05 11:34:12.868157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868390] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:13.685 [2024-11-05 11:34:12.868574] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:13.685 [2024-11-05 11:34:12.868583] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:18:13.685 [2024-11-05 11:34:12.868591] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:13.685 [2024-11-05 11:34:12.868598] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:18:13.685 [2024-11-05 11:34:12.868606] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:13.685 [2024-11-05 11:34:12.868614] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:13.685 [2024-11-05 11:34:12.868621] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:13.685 [2024-11-05 11:34:12.868630] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:13.685 [2024-11-05 11:34:12.868638] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:13.685 [2024-11-05 11:34:12.868644] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:13.685 [2024-11-05 11:34:12.868650] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:13.685 [2024-11-05 11:34:12.868658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-05 11:34:12.868666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:13.685 [2024-11-05 11:34:12.868678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.915 ms 00:18:13.685 [2024-11-05 11:34:12.868685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.882176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-05 11:34:12.882338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:13.685 [2024-11-05 11:34:12.882392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.456 ms 00:18:13.685 [2024-11-05 11:34:12.882415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.882888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:13.685 [2024-11-05 11:34:12.882945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:13.685 [2024-11-05 11:34:12.883028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:18:13.685 [2024-11-05 11:34:12.883051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.922103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.685 [2024-11-05 11:34:12.922275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:13.685 [2024-11-05 11:34:12.922335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.685 [2024-11-05 11:34:12.922358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.922450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.685 [2024-11-05 11:34:12.922481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:13.685 [2024-11-05 11:34:12.922501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.685 [2024-11-05 11:34:12.922546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.922616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.685 [2024-11-05 11:34:12.922639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:13.685 [2024-11-05 11:34:12.922737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.685 [2024-11-05 11:34:12.922760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.685 [2024-11-05 11:34:12.922794] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.685 [2024-11-05 11:34:12.922839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:13.685 [2024-11-05 11:34:12.922866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.686 [2024-11-05 11:34:12.922885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.008857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.009047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:13.948 [2024-11-05 11:34:13.009110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.009133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:13.948 [2024-11-05 11:34:13.079273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:13.948 [2024-11-05 11:34:13.079364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:13.948 [2024-11-05 11:34:13.079422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:13.948 [2024-11-05 11:34:13.079565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:13.948 [2024-11-05 11:34:13.079628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:13.948 [2024-11-05 11:34:13.079704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079713] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:13.948 [2024-11-05 11:34:13.079773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:13.948 [2024-11-05 11:34:13.079782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:13.948 [2024-11-05 11:34:13.079793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:13.948 [2024-11-05 11:34:13.079992] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.216 ms, result 0 00:18:14.888 00:18:14.888 00:18:14.888 11:34:13 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74031 00:18:14.888 11:34:13 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74031 00:18:14.888 11:34:13 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 74031 ']' 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:14.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:14.888 11:34:13 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:14.888 [2024-11-05 11:34:13.930391] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:18:14.888 [2024-11-05 11:34:13.930549] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74031 ] 00:18:14.888 [2024-11-05 11:34:14.094076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.150 [2024-11-05 11:34:14.213476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.722 11:34:14 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:15.722 11:34:14 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:15.722 11:34:14 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:18:15.985 [2024-11-05 11:34:15.109958] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:15.985 [2024-11-05 11:34:15.110036] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:16.248 [2024-11-05 11:34:15.288063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.288297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:16.248 [2024-11-05 11:34:15.288327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:16.248 [2024-11-05 11:34:15.288337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.294646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.294753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:16.248 [2024-11-05 11:34:15.294789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.273 ms 00:18:16.248 [2024-11-05 11:34:15.294850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.295184] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:16.248 [2024-11-05 11:34:15.297281] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:16.248 [2024-11-05 11:34:15.297357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.297380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:16.248 [2024-11-05 11:34:15.297408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.203 ms 00:18:16.248 [2024-11-05 11:34:15.297428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.300065] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:16.248 [2024-11-05 11:34:15.315007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.315064] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:16.248 [2024-11-05 11:34:15.315077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.959 ms 00:18:16.248 [2024-11-05 11:34:15.315087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.315197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.315210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:16.248 [2024-11-05 11:34:15.315220] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:18:16.248 [2024-11-05 11:34:15.315230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.323122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.323172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:16.248 [2024-11-05 11:34:15.323183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.840 ms 00:18:16.248 [2024-11-05 11:34:15.323192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.323306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.323320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:16.248 [2024-11-05 11:34:15.323328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:18:16.248 [2024-11-05 11:34:15.323338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.323364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.323378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:16.248 [2024-11-05 11:34:15.323387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:16.248 [2024-11-05 11:34:15.323397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.323420] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:16.248 [2024-11-05 11:34:15.327483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.327523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:16.248 [2024-11-05 11:34:15.327535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.065 ms 00:18:16.248 [2024-11-05 11:34:15.327543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.327619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.327629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:16.248 [2024-11-05 11:34:15.327640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:16.248 [2024-11-05 11:34:15.327649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.327672] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:16.248 [2024-11-05 11:34:15.327695] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:16.248 [2024-11-05 11:34:15.327738] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:16.248 [2024-11-05 11:34:15.327754] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:16.248 [2024-11-05 11:34:15.327886] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:16.248 [2024-11-05 11:34:15.327899] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:16.248 [2024-11-05 11:34:15.327913] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:16.248 [2024-11-05 11:34:15.327923] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:16.248 [2024-11-05 11:34:15.327937] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:16.248 [2024-11-05 11:34:15.327948] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:16.248 [2024-11-05 11:34:15.327957] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:16.248 [2024-11-05 11:34:15.327965] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:16.248 [2024-11-05 11:34:15.327977] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:16.248 [2024-11-05 11:34:15.327984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.327994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:16.248 [2024-11-05 11:34:15.328002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:18:16.248 [2024-11-05 11:34:15.328011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.248 [2024-11-05 11:34:15.328100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.248 [2024-11-05 11:34:15.328110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:16.248 [2024-11-05 11:34:15.328120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:16.248 [2024-11-05 11:34:15.328130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.328232] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:16.249 [2024-11-05 11:34:15.328244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:16.249 [2024-11-05 11:34:15.328254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:16.249 [2024-11-05 11:34:15.328281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328288] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:16.249 [2024-11-05 11:34:15.328309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.249 [2024-11-05 11:34:15.328324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:16.249 [2024-11-05 11:34:15.328333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:16.249 [2024-11-05 11:34:15.328340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:16.249 [2024-11-05 11:34:15.328349] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:16.249 [2024-11-05 11:34:15.328355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:16.249 [2024-11-05 11:34:15.328364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 
[2024-11-05 11:34:15.328371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:16.249 [2024-11-05 11:34:15.328381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328399] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:16.249 [2024-11-05 11:34:15.328412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328428] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:16.249 [2024-11-05 11:34:15.328438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328454] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:16.249 [2024-11-05 11:34:15.328461] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:16.249 [2024-11-05 11:34:15.328485] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328492] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:16.249 [2024-11-05 11:34:15.328509] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.249 [2024-11-05 11:34:15.328524] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:16.249 [2024-11-05 11:34:15.328532] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:16.249 [2024-11-05 11:34:15.328539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:16.249 [2024-11-05 11:34:15.328548] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:16.249 [2024-11-05 11:34:15.328555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:16.249 [2024-11-05 11:34:15.328565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328572] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:16.249 [2024-11-05 11:34:15.328581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:16.249 [2024-11-05 11:34:15.328587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328596] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:16.249 [2024-11-05 11:34:15.328604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:16.249 [2024-11-05 11:34:15.328614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328624] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:16.249 [2024-11-05 11:34:15.328634] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:18:16.249 [2024-11-05 11:34:15.328641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:16.249 [2024-11-05 11:34:15.328650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:16.249 [2024-11-05 11:34:15.328658] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:16.249 [2024-11-05 11:34:15.328667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:16.249 [2024-11-05 11:34:15.328674] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:16.249 [2024-11-05 11:34:15.328684] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:16.249 [2024-11-05 11:34:15.328694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:16.249 [2024-11-05 11:34:15.328716] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:16.249 [2024-11-05 11:34:15.328725] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:16.249 [2024-11-05 11:34:15.328733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:16.249 [2024-11-05 11:34:15.328743] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:16.249 [2024-11-05 11:34:15.328750] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:16.249 [2024-11-05 11:34:15.328760] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:16.249 [2024-11-05 11:34:15.328767] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:16.249 [2024-11-05 11:34:15.328776] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:16.249 [2024-11-05 11:34:15.328784] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328824] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:16.249 [2024-11-05 11:34:15.328841] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:16.249 [2024-11-05 
11:34:15.328849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:16.249 [2024-11-05 11:34:15.328869] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:16.249 [2024-11-05 11:34:15.328878] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:16.249 [2024-11-05 11:34:15.328885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:16.249 [2024-11-05 11:34:15.328895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.328903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:16.249 [2024-11-05 11:34:15.328914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.730 ms 00:18:16.249 [2024-11-05 11:34:15.328921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.360488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.360538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:16.249 [2024-11-05 11:34:15.360552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.505 ms 00:18:16.249 [2024-11-05 11:34:15.360560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.360690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.360703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:16.249 [2024-11-05 11:34:15.360714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:18:16.249 [2024-11-05 11:34:15.360722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.395716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.395764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:16.249 [2024-11-05 11:34:15.395780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.967 ms 00:18:16.249 [2024-11-05 11:34:15.395791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.395897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.395909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:16.249 [2024-11-05 11:34:15.395921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:16.249 [2024-11-05 11:34:15.395929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.396457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.249 [2024-11-05 11:34:15.396497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:16.249 [2024-11-05 11:34:15.396511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.501 ms 00:18:16.249 [2024-11-05 11:34:15.396520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:18:16.249 [2024-11-05 11:34:15.396662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.396671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:16.250 [2024-11-05 11:34:15.396681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:16.250 [2024-11-05 11:34:15.396689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.414457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.414655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:16.250 [2024-11-05 11:34:15.414677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.742 ms 00:18:16.250 [2024-11-05 11:34:15.414686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.429098] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:16.250 [2024-11-05 11:34:15.429271] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:16.250 [2024-11-05 11:34:15.429295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.429304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:16.250 [2024-11-05 11:34:15.429316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.464 ms 00:18:16.250 [2024-11-05 11:34:15.429323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.454782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.454837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:16.250 [2024-11-05 11:34:15.454852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.372 ms 00:18:16.250 [2024-11-05 11:34:15.454861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.467623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.467667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:16.250 [2024-11-05 11:34:15.467686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.666 ms 00:18:16.250 [2024-11-05 11:34:15.467694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.480418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.480460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:16.250 [2024-11-05 11:34:15.480474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.635 ms 00:18:16.250 [2024-11-05 11:34:15.480481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.250 [2024-11-05 11:34:15.481170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.250 [2024-11-05 11:34:15.481200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:16.250 [2024-11-05 11:34:15.481212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:18:16.250 [2024-11-05 11:34:15.481221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 
11:34:15.557979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.558207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:16.512 [2024-11-05 11:34:15.558240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 76.727 ms 00:18:16.512 [2024-11-05 11:34:15.558250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.570077] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:16.512 [2024-11-05 11:34:15.588948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.589185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:16.512 [2024-11-05 11:34:15.589207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.070 ms 00:18:16.512 [2024-11-05 11:34:15.589220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.589323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.589338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:16.512 [2024-11-05 11:34:15.589347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:16.512 [2024-11-05 11:34:15.589358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.589417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.589429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:16.512 [2024-11-05 11:34:15.589437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:16.512 [2024-11-05 11:34:15.589447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.589476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.589487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:16.512 [2024-11-05 11:34:15.589496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:16.512 [2024-11-05 11:34:15.589509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.589546] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:16.512 [2024-11-05 11:34:15.589561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.589570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:16.512 [2024-11-05 11:34:15.589581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:16.512 [2024-11-05 11:34:15.589591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.615054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.615105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:16.512 [2024-11-05 11:34:15.615120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.432 ms 00:18:16.512 [2024-11-05 11:34:15.615129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.615245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.512 [2024-11-05 11:34:15.615256] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:16.512 [2024-11-05 11:34:15.615268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:16.512 [2024-11-05 11:34:15.615276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.512 [2024-11-05 11:34:15.616379] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:16.512 [2024-11-05 11:34:15.619943] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.982 ms, result 0 00:18:16.512 [2024-11-05 11:34:15.622017] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:16.512 Some configs were skipped because the RPC state that can call them passed over. 00:18:16.512 11:34:15 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:18:16.774 [2024-11-05 11:34:15.866217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:16.774 [2024-11-05 11:34:15.866431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:16.774 [2024-11-05 11:34:15.866499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.729 ms 00:18:16.774 [2024-11-05 11:34:15.866553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:16.774 [2024-11-05 11:34:15.866615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.129 ms, result 0 00:18:16.774 true 00:18:16.774 11:34:15 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:18:17.036 [2024-11-05 11:34:16.086337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.036 [2024-11-05 11:34:16.086499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:18:17.036 [2024-11-05 11:34:16.086579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.565 ms 00:18:17.036 [2024-11-05 11:34:16.086605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.036 [2024-11-05 11:34:16.086667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.896 ms, result 0 00:18:17.036 true 00:18:17.036 11:34:16 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74031 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74031 ']' 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74031 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74031 00:18:17.036 killing process with pid 74031 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74031' 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 74031 00:18:17.036 11:34:16 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 74031 00:18:17.608 [2024-11-05 11:34:16.878245] 
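The two RPC calls traced just above come from ftl/trim.sh (@99 and @100); each one shows up as a short 'FTL trim' management process ('Process trim', roughly 3 ms in this run). A minimal sketch of repeating them by hand against an already-created ftl0 bdev, reusing the exact paths this job uses, would be roughly:

# Assumes an SPDK app is already running with ftl0 set up as in this run;
# the repo path below is simply the one used by this CI job.
SPDK=/home/vagrant/spdk_repo/spdk
# trim.sh@99: unmap 1024 blocks at the start of the device
"$SPDK"/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
# trim.sh@100: unmap 1024 blocks at the end of the 23592960-entry L2P space
# (23592960 - 1024 = 23591936, hence --lba 23591936)
"$SPDK"/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024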
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.608 [2024-11-05 11:34:16.878327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:17.608 [2024-11-05 11:34:16.878343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:17.609 [2024-11-05 11:34:16.878353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.609 [2024-11-05 11:34:16.878378] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:17.609 [2024-11-05 11:34:16.881442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.609 [2024-11-05 11:34:16.881635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:17.609 [2024-11-05 11:34:16.881669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.043 ms 00:18:17.609 [2024-11-05 11:34:16.881678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.609 [2024-11-05 11:34:16.881992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.609 [2024-11-05 11:34:16.882004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:17.609 [2024-11-05 11:34:16.882015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:18:17.609 [2024-11-05 11:34:16.882023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.886670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.886711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:17.871 [2024-11-05 11:34:16.886724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.624 ms 00:18:17.871 [2024-11-05 11:34:16.886735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.893754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.893798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:17.871 [2024-11-05 11:34:16.893833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.971 ms 00:18:17.871 [2024-11-05 11:34:16.893841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.904619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.904797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:17.871 [2024-11-05 11:34:16.904837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.711 ms 00:18:17.871 [2024-11-05 11:34:16.904852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.913792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.913849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:17.871 [2024-11-05 11:34:16.913864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.887 ms 00:18:17.871 [2024-11-05 11:34:16.913874] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.914028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.914039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:17.871 [2024-11-05 11:34:16.914051] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:17.871 [2024-11-05 11:34:16.914058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.925514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.925560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:17.871 [2024-11-05 11:34:16.925573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.430 ms 00:18:17.871 [2024-11-05 11:34:16.925580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.934185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.934224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:17.871 [2024-11-05 11:34:16.934238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.552 ms 00:18:17.871 [2024-11-05 11:34:16.934243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.941669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.941707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:17.871 [2024-11-05 11:34:16.941717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.375 ms 00:18:17.871 [2024-11-05 11:34:16.941723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.949087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.871 [2024-11-05 11:34:16.949126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:17.871 [2024-11-05 11:34:16.949136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.298 ms 00:18:17.871 [2024-11-05 11:34:16.949142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.871 [2024-11-05 11:34:16.949180] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:17.871 [2024-11-05 11:34:16.949192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:17.871 [2024-11-05 11:34:16.949202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:17.871 [2024-11-05 11:34:16.949208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:17.871 [2024-11-05 11:34:16.949216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949266] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 
[2024-11-05 11:34:16.949441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:18:17.872 [2024-11-05 11:34:16.949612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:17.872 [2024-11-05 11:34:16.949860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:17.873 [2024-11-05 11:34:16.949917] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:17.873 [2024-11-05 11:34:16.949927] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:18:17.873 [2024-11-05 11:34:16.949939] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:17.873 [2024-11-05 11:34:16.949948] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:17.873 [2024-11-05 11:34:16.949957] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:17.873 [2024-11-05 11:34:16.949966] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:17.873 [2024-11-05 11:34:16.949972] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:17.873 [2024-11-05 11:34:16.949980] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:17.873 [2024-11-05 11:34:16.949986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:17.873 [2024-11-05 11:34:16.949993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:17.873 [2024-11-05 11:34:16.949998] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:17.873 [2024-11-05 11:34:16.950005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
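The dump above is the shutdown-time report from ftl_debug.c: each 'Band N: X / 261120 wr_cnt: Y state: Z' line appears to give the band's valid blocks out of its 261120-block capacity, its write count, and its state, while 'WAF: inf' simply reflects total writes = 960 with user writes = 0 (write amplification cannot be computed without user writes). To condense the band list from a saved console log, a one-liner along these lines works (the log file name is only a placeholder):

# Count bands per state from a captured log
grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' console.log \
  | awk '{count[$NF]++} END {for (s in count) print s, count[s]}'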
00:18:17.873 [2024-11-05 11:34:16.950013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:17.873 [2024-11-05 11:34:16.950022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.827 ms 00:18:17.873 [2024-11-05 11:34:16.950031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.960422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.873 [2024-11-05 11:34:16.960455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:17.873 [2024-11-05 11:34:16.960469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.359 ms 00:18:17.873 [2024-11-05 11:34:16.960475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.960786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:17.873 [2024-11-05 11:34:16.960794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:17.873 [2024-11-05 11:34:16.960822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:18:17.873 [2024-11-05 11:34:16.960829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.997448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:16.997479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:17.873 [2024-11-05 11:34:16.997490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:16.997496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.997577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:16.997585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:17.873 [2024-11-05 11:34:16.997593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:16.997599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.997636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:16.997643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:17.873 [2024-11-05 11:34:16.997653] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:16.997658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:16.997673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:16.997679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:17.873 [2024-11-05 11:34:16.997687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:16.997693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.057010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.057042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:17.873 [2024-11-05 11:34:17.057051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.057057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 
11:34:17.105188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:17.873 [2024-11-05 11:34:17.105226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:17.873 [2024-11-05 11:34:17.105310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:17.873 [2024-11-05 11:34:17.105352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:17.873 [2024-11-05 11:34:17.105442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:17.873 [2024-11-05 11:34:17.105486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:17.873 [2024-11-05 11:34:17.105537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:17.873 [2024-11-05 11:34:17.105584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:17.873 [2024-11-05 11:34:17.105591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:17.873 [2024-11-05 11:34:17.105596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:17.873 [2024-11-05 11:34:17.105700] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 227.445 ms, result 0 00:18:18.447 11:34:17 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:18.447 [2024-11-05 11:34:17.672002] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:18:18.447 [2024-11-05 11:34:17.672536] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74084 ] 00:18:18.708 [2024-11-05 11:34:17.828357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.708 [2024-11-05 11:34:17.903703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.970 [2024-11-05 11:34:18.108457] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:18.970 [2024-11-05 11:34:18.108506] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:19.233 [2024-11-05 11:34:18.256087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.256122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:19.233 [2024-11-05 11:34:18.256133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.233 [2024-11-05 11:34:18.256139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.258164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.258330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:19.233 [2024-11-05 11:34:18.258343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.013 ms 00:18:19.233 [2024-11-05 11:34:18.258349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.258436] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:19.233 [2024-11-05 11:34:18.258965] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:19.233 [2024-11-05 11:34:18.258983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.258990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:19.233 [2024-11-05 11:34:18.258997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:18:19.233 [2024-11-05 11:34:18.259002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.260150] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:19.233 [2024-11-05 11:34:18.269799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.269830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:19.233 [2024-11-05 11:34:18.269842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.649 ms 00:18:19.233 [2024-11-05 11:34:18.269849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.269910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.269920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:19.233 [2024-11-05 11:34:18.269926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:18:19.233 [2024-11-05 
11:34:18.269932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.274212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.274239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:19.233 [2024-11-05 11:34:18.274246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.252 ms 00:18:19.233 [2024-11-05 11:34:18.274251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.274323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.274330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:19.233 [2024-11-05 11:34:18.274339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:18:19.233 [2024-11-05 11:34:18.274345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.274360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.274366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:19.233 [2024-11-05 11:34:18.274373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:19.233 [2024-11-05 11:34:18.274379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.274395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:19.233 [2024-11-05 11:34:18.277064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.277170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:19.233 [2024-11-05 11:34:18.277182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.673 ms 00:18:19.233 [2024-11-05 11:34:18.277188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.277225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.233 [2024-11-05 11:34:18.277232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:19.233 [2024-11-05 11:34:18.277238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:19.233 [2024-11-05 11:34:18.277244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.233 [2024-11-05 11:34:18.277257] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:19.233 [2024-11-05 11:34:18.277272] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:19.233 [2024-11-05 11:34:18.277299] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:19.233 [2024-11-05 11:34:18.277311] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:19.233 [2024-11-05 11:34:18.277389] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:19.233 [2024-11-05 11:34:18.277397] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:19.234 [2024-11-05 11:34:18.277404] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
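This second FTL startup is happening inside spdk_dd: trim.sh@105 re-attaches ftl0 from the JSON config referenced by the test and reads the data back into a file (the 'Copying: .../256 [MB]' progress lines further down). Stripped of the shell tracing, the invocation is essentially the following sketch (paths are the ones this CI job uses):

SPDK=/home/vagrant/spdk_repo/spdk
# Read back from the ftl0 bdev into a plain file, attaching the bdev stack
# described by the ftl.json config used in this test
"$SPDK"/build/bin/spdk_dd --ib=ftl0 \
  --of="$SPDK"/test/ftl/data \
  --count=65536 \
  --json="$SPDK"/test/ftl/config/ftl.json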
00:18:19.234 [2024-11-05 11:34:18.277411] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277418] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277426] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:19.234 [2024-11-05 11:34:18.277431] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:19.234 [2024-11-05 11:34:18.277437] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:19.234 [2024-11-05 11:34:18.277442] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:19.234 [2024-11-05 11:34:18.277449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.234 [2024-11-05 11:34:18.277454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:19.234 [2024-11-05 11:34:18.277460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:18:19.234 [2024-11-05 11:34:18.277465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.234 [2024-11-05 11:34:18.277531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.234 [2024-11-05 11:34:18.277537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:19.234 [2024-11-05 11:34:18.277543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:18:19.234 [2024-11-05 11:34:18.277550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.234 [2024-11-05 11:34:18.277622] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:19.234 [2024-11-05 11:34:18.277629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:19.234 [2024-11-05 11:34:18.277635] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:19.234 [2024-11-05 11:34:18.277652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277657] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:19.234 [2024-11-05 11:34:18.277667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.234 [2024-11-05 11:34:18.277677] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:19.234 [2024-11-05 11:34:18.277682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:19.234 [2024-11-05 11:34:18.277687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:19.234 [2024-11-05 11:34:18.277696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:19.234 [2024-11-05 11:34:18.277701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:19.234 [2024-11-05 11:34:18.277706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:18:19.234 [2024-11-05 11:34:18.277716] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277721] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:19.234 [2024-11-05 11:34:18.277732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:19.234 [2024-11-05 11:34:18.277747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:19.234 [2024-11-05 11:34:18.277761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277771] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:19.234 [2024-11-05 11:34:18.277776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277781] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:19.234 [2024-11-05 11:34:18.277790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277795] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.234 [2024-11-05 11:34:18.277814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:19.234 [2024-11-05 11:34:18.277820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:19.234 [2024-11-05 11:34:18.277824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:19.234 [2024-11-05 11:34:18.277829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:19.234 [2024-11-05 11:34:18.277834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:19.234 [2024-11-05 11:34:18.277839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:19.234 [2024-11-05 11:34:18.277848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:19.234 [2024-11-05 11:34:18.277853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277859] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:19.234 [2024-11-05 11:34:18.277865] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:19.234 [2024-11-05 11:34:18.277870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:19.234 [2024-11-05 11:34:18.277883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:19.234 [2024-11-05 11:34:18.277888] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:19.234 [2024-11-05 11:34:18.277893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:19.234 [2024-11-05 11:34:18.277898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:19.234 [2024-11-05 11:34:18.277904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:19.234 [2024-11-05 11:34:18.277909] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:19.234 [2024-11-05 11:34:18.277915] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:19.234 [2024-11-05 11:34:18.277922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.277928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:19.234 [2024-11-05 11:34:18.277934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:19.234 [2024-11-05 11:34:18.277939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:18:19.234 [2024-11-05 11:34:18.277944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:19.234 [2024-11-05 11:34:18.277949] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:19.234 [2024-11-05 11:34:18.277955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:19.234 [2024-11-05 11:34:18.277960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:19.234 [2024-11-05 11:34:18.277965] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:19.234 [2024-11-05 11:34:18.277970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:19.234 [2024-11-05 11:34:18.277976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.277981] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.277986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.277992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.277997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:19.234 [2024-11-05 11:34:18.278002] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:19.234 [2024-11-05 11:34:18.278008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.278015] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:19.234 [2024-11-05 11:34:18.278020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:19.234 [2024-11-05 11:34:18.278025] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:19.234 [2024-11-05 11:34:18.278031] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:19.234 [2024-11-05 11:34:18.278036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.234 [2024-11-05 11:34:18.278041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:19.234 [2024-11-05 11:34:18.278047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.466 ms 00:18:19.234 [2024-11-05 11:34:18.278054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.234 [2024-11-05 11:34:18.298766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.234 [2024-11-05 11:34:18.298793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:19.234 [2024-11-05 11:34:18.298813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.674 ms 00:18:19.234 [2024-11-05 11:34:18.298819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.234 [2024-11-05 11:34:18.298911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.234 [2024-11-05 11:34:18.298919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:19.235 [2024-11-05 11:34:18.298928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:18:19.235 [2024-11-05 11:34:18.298934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.343712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.343836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:19.235 [2024-11-05 11:34:18.343850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.762 ms 00:18:19.235 [2024-11-05 11:34:18.343856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.343918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.343927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:19.235 [2024-11-05 11:34:18.343934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:19.235 [2024-11-05 11:34:18.343939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.344215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.344227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:19.235 [2024-11-05 11:34:18.344234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:19.235 [2024-11-05 11:34:18.344245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.344347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.344355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:19.235 [2024-11-05 11:34:18.344362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:18:19.235 [2024-11-05 11:34:18.344368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.355066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.355159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:19.235 [2024-11-05 11:34:18.355171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.683 ms 00:18:19.235 [2024-11-05 11:34:18.355177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.364856] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:19.235 [2024-11-05 11:34:18.364883] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:19.235 [2024-11-05 11:34:18.364893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.364899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:19.235 [2024-11-05 11:34:18.364905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.624 ms 00:18:19.235 [2024-11-05 11:34:18.364911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.383416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.383448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:19.235 [2024-11-05 11:34:18.383456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.460 ms 00:18:19.235 [2024-11-05 11:34:18.383462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.392142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.392167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:19.235 [2024-11-05 11:34:18.392174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.628 ms 00:18:19.235 [2024-11-05 11:34:18.392180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.400686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.400709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:19.235 [2024-11-05 11:34:18.400716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.467 ms 00:18:19.235 [2024-11-05 11:34:18.400721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.401190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.401207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:19.235 [2024-11-05 11:34:18.401214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.411 ms 00:18:19.235 [2024-11-05 11:34:18.401219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.444299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.444338] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:19.235 [2024-11-05 11:34:18.444349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.062 ms 00:18:19.235 [2024-11-05 11:34:18.444356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.452014] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:19.235 [2024-11-05 11:34:18.463234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.463262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:19.235 [2024-11-05 11:34:18.463272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.816 ms 00:18:19.235 [2024-11-05 11:34:18.463278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.463347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.463356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:19.235 [2024-11-05 11:34:18.463364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:19.235 [2024-11-05 11:34:18.463369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.463406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.463413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:19.235 [2024-11-05 11:34:18.463419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:19.235 [2024-11-05 11:34:18.463425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.463446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.463453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:19.235 [2024-11-05 11:34:18.463465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:19.235 [2024-11-05 11:34:18.463471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.463494] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:19.235 [2024-11-05 11:34:18.463501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.463507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:19.235 [2024-11-05 11:34:18.463513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:19.235 [2024-11-05 11:34:18.463519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.481284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.481315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:19.235 [2024-11-05 11:34:18.481324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.751 ms 00:18:19.235 [2024-11-05 11:34:18.481330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.481399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:19.235 [2024-11-05 11:34:18.481408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:19.235 [2024-11-05 11:34:18.481415] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:19.235 [2024-11-05 11:34:18.481420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:19.235 [2024-11-05 11:34:18.482034] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:19.235 [2024-11-05 11:34:18.484413] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.711 ms, result 0 00:18:19.235 [2024-11-05 11:34:18.485081] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:19.235 [2024-11-05 11:34:18.499882] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:20.620  [2024-11-05T11:34:20.835Z] Copying: 24/256 [MB] (24 MBps) [2024-11-05T11:34:21.779Z] Copying: 38/256 [MB] (13 MBps) [2024-11-05T11:34:22.721Z] Copying: 54/256 [MB] (16 MBps) [2024-11-05T11:34:23.685Z] Copying: 81/256 [MB] (26 MBps) [2024-11-05T11:34:24.631Z] Copying: 95/256 [MB] (14 MBps) [2024-11-05T11:34:25.575Z] Copying: 107/256 [MB] (12 MBps) [2024-11-05T11:34:26.964Z] Copying: 127/256 [MB] (20 MBps) [2024-11-05T11:34:27.910Z] Copying: 145/256 [MB] (17 MBps) [2024-11-05T11:34:28.856Z] Copying: 163/256 [MB] (18 MBps) [2024-11-05T11:34:29.808Z] Copying: 186/256 [MB] (22 MBps) [2024-11-05T11:34:30.752Z] Copying: 199/256 [MB] (13 MBps) [2024-11-05T11:34:31.692Z] Copying: 221/256 [MB] (22 MBps) [2024-11-05T11:34:32.698Z] Copying: 237/256 [MB] (15 MBps) [2024-11-05T11:34:32.698Z] Copying: 254/256 [MB] (16 MBps) [2024-11-05T11:34:32.698Z] Copying: 256/256 [MB] (average 18 MBps)[2024-11-05 11:34:32.612763] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:33.424 [2024-11-05 11:34:32.623577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.623631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:33.424 [2024-11-05 11:34:32.623645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:33.424 [2024-11-05 11:34:32.623654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.623678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:33.424 [2024-11-05 11:34:32.626681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.626732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:33.424 [2024-11-05 11:34:32.626743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.987 ms 00:18:33.424 [2024-11-05 11:34:32.626752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.627031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.627041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:33.424 [2024-11-05 11:34:32.627051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:18:33.424 [2024-11-05 11:34:32.627059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.631261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.631295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Persist L2P 00:18:33.424 [2024-11-05 11:34:32.631311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:18:33.424 [2024-11-05 11:34:32.631320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.639198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.639246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:33.424 [2024-11-05 11:34:32.639257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.853 ms 00:18:33.424 [2024-11-05 11:34:32.639265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.666901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.666953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:33.424 [2024-11-05 11:34:32.666966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.566 ms 00:18:33.424 [2024-11-05 11:34:32.666974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.683025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.683071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:33.424 [2024-11-05 11:34:32.683093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.963 ms 00:18:33.424 [2024-11-05 11:34:32.683101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.424 [2024-11-05 11:34:32.683261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.424 [2024-11-05 11:34:32.683272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:33.424 [2024-11-05 11:34:32.683282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:18:33.424 [2024-11-05 11:34:32.683291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.686 [2024-11-05 11:34:32.709585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.686 [2024-11-05 11:34:32.709787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:33.686 [2024-11-05 11:34:32.709833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.268 ms 00:18:33.686 [2024-11-05 11:34:32.709841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.686 [2024-11-05 11:34:32.735076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.686 [2024-11-05 11:34:32.735123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:33.686 [2024-11-05 11:34:32.735134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.176 ms 00:18:33.686 [2024-11-05 11:34:32.735141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.686 [2024-11-05 11:34:32.760021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.686 [2024-11-05 11:34:32.760068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:33.686 [2024-11-05 11:34:32.760080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.831 ms 00:18:33.686 [2024-11-05 11:34:32.760088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.686 [2024-11-05 11:34:32.784622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.686 [2024-11-05 
11:34:32.784671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:33.687 [2024-11-05 11:34:32.784683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.454 ms 00:18:33.687 [2024-11-05 11:34:32.784691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.687 [2024-11-05 11:34:32.784739] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:33.687 [2024-11-05 11:34:32.784762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 
wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.784999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785379] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:33.687 [2024-11-05 11:34:32.785490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785574] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:33.688 [2024-11-05 11:34:32.785614] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:33.688 [2024-11-05 11:34:32.785622] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 9a61a43b-8840-4edd-a0ff-ca2f1deb6908 00:18:33.688 [2024-11-05 11:34:32.785631] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:33.688 [2024-11-05 11:34:32.785638] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:33.688 [2024-11-05 11:34:32.785646] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:33.688 [2024-11-05 11:34:32.785654] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:33.688 [2024-11-05 11:34:32.785661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:33.688 [2024-11-05 11:34:32.785670] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:33.688 [2024-11-05 11:34:32.785677] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:33.688 [2024-11-05 11:34:32.785684] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:33.688 [2024-11-05 11:34:32.785691] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:33.688 [2024-11-05 11:34:32.785698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.688 [2024-11-05 11:34:32.785706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:33.688 [2024-11-05 11:34:32.785715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.960 ms 00:18:33.688 [2024-11-05 11:34:32.785726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.798928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.688 [2024-11-05 11:34:32.798969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:33.688 [2024-11-05 11:34:32.798980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.170 ms 00:18:33.688 [2024-11-05 11:34:32.798988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.799386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:33.688 [2024-11-05 11:34:32.799403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:33.688 [2024-11-05 11:34:32.799414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms 00:18:33.688 [2024-11-05 11:34:32.799421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.838300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.688 [2024-11-05 11:34:32.838349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:33.688 [2024-11-05 11:34:32.838360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.688 [2024-11-05 11:34:32.838369] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.838458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.688 [2024-11-05 11:34:32.838471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:33.688 [2024-11-05 11:34:32.838480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.688 [2024-11-05 11:34:32.838487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.838538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.688 [2024-11-05 11:34:32.838564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:33.688 [2024-11-05 11:34:32.838572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.688 [2024-11-05 11:34:32.838580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.838600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.688 [2024-11-05 11:34:32.838608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:33.688 [2024-11-05 11:34:32.838619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.688 [2024-11-05 11:34:32.838627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.688 [2024-11-05 11:34:32.921453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.688 [2024-11-05 11:34:32.921509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:33.688 [2024-11-05 11:34:32.921522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.688 [2024-11-05 11:34:32.921530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:33.949 [2024-11-05 11:34:32.990326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:33.949 [2024-11-05 11:34:32.990415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:33.949 [2024-11-05 11:34:32.990474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:33.949 [2024-11-05 11:34:32.990623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:18:33.949 [2024-11-05 11:34:32.990631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:33.949 [2024-11-05 11:34:32.990685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:33.949 [2024-11-05 11:34:32.990758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.949 [2024-11-05 11:34:32.990853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:33.949 [2024-11-05 11:34:32.990865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:33.949 [2024-11-05 11:34:32.990873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:33.949 [2024-11-05 11:34:32.990883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:33.950 [2024-11-05 11:34:32.991045] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 367.456 ms, result 0 00:18:34.523 00:18:34.523 00:18:34.523 11:34:33 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:35.093 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:35.093 11:34:34 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:35.355 11:34:34 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74031 00:18:35.355 11:34:34 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 74031 ']' 00:18:35.355 11:34:34 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 74031 00:18:35.355 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74031) - No such process 00:18:35.355 Process with pid 74031 is not found 00:18:35.355 11:34:34 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 74031 is not found' 00:18:35.355 ************************************ 00:18:35.355 END TEST ftl_trim 00:18:35.355 ************************************ 00:18:35.355 00:18:35.355 real 1m10.630s 00:18:35.355 user 1m27.371s 00:18:35.355 sys 0m14.979s 00:18:35.355 11:34:34 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:35.355 11:34:34 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:35.355 11:34:34 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:35.355 11:34:34 
ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:18:35.355 11:34:34 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:35.355 11:34:34 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:35.355 ************************************ 00:18:35.355 START TEST ftl_restore 00:18:35.355 ************************************ 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:35.355 * Looking for test storage... 00:18:35.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.355 11:34:34 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:35.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.355 --rc genhtml_branch_coverage=1 00:18:35.355 --rc genhtml_function_coverage=1 00:18:35.355 --rc genhtml_legend=1 00:18:35.355 --rc geninfo_all_blocks=1 00:18:35.355 --rc geninfo_unexecuted_blocks=1 00:18:35.355 00:18:35.355 ' 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:35.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.355 --rc genhtml_branch_coverage=1 00:18:35.355 --rc genhtml_function_coverage=1 00:18:35.355 --rc genhtml_legend=1 00:18:35.355 --rc geninfo_all_blocks=1 00:18:35.355 --rc geninfo_unexecuted_blocks=1 00:18:35.355 00:18:35.355 ' 00:18:35.355 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:35.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.355 --rc genhtml_branch_coverage=1 00:18:35.355 --rc genhtml_function_coverage=1 00:18:35.355 --rc genhtml_legend=1 00:18:35.355 --rc geninfo_all_blocks=1 00:18:35.356 --rc geninfo_unexecuted_blocks=1 00:18:35.356 00:18:35.356 ' 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:35.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.356 --rc genhtml_branch_coverage=1 00:18:35.356 --rc genhtml_function_coverage=1 00:18:35.356 --rc genhtml_legend=1 00:18:35.356 --rc geninfo_all_blocks=1 00:18:35.356 --rc geninfo_unexecuted_blocks=1 00:18:35.356 00:18:35.356 ' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
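The trace above is scripts/common.sh probing the installed lcov and deciding which coverage flags to export: "lt 1.15 2" feeds the lcov version into cmp_versions, which splits both version strings on dots, dashes and colons and compares them field by field. A simplified stand-alone sketch of that comparison (ver_lt is a hypothetical name used only here; the real cmp_versions also handles '>', '==' and the ge/le wrappers):

    # ver_lt A B  ->  exit 0 when version A sorts before version B
    ver_lt() {
        local IFS=.-:                # split fields the same way cmp_versions does
        local -a a b
        local i
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0    # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1                     # equal versions are not "less than"
    }

    ver_lt 1.15 2 && echo "lcov predates 2.x"            # mirrors the 'lt 1.15 2' check traced above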
00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:35.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
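Before any RPCs are issued, ftl/common.sh pins down the tooling and keeps the target and the second "ini" app apart: the target is pinned to core 0 and answers on the default /var/tmp/spdk.sock, the ini side gets core 1 and its own socket at /var/tmp/spdk.tgt.sock, and each has a JSON config under test/ftl/config/. restore.sh then reserves a scratch directory with mktemp. Restated as a plain environment block (values copied from the trace; the repo path is this CI worker's checkout):

    rootdir=/home/vagrant/spdk_repo/spdk
    rpc_py=$rootdir/scripts/rpc.py                   # RPC client used for all bdev setup below
    spdk_tgt_bin=$rootdir/build/bin/spdk_tgt
    spdk_tgt_cpumask='[0]'                           # target reactor mask
    spdk_tgt_cnfg=$rootdir/test/ftl/config/tgt.json
    spdk_ini_cpumask='[1]'                           # mask for the optional second app
    spdk_ini_rpc=/var/tmp/spdk.tgt.sock              # its RPC socket
    spdk_ini_cnfg=$rootdir/test/ftl/config/ini.json
    mount_dir=$(mktemp -d)                           # scratch dir reserved by restore.sh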
00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.HPQWvzbfj3 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74328 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74328 00:18:35.356 11:34:34 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 74328 ']' 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:35.356 11:34:34 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:35.617 [2024-11-05 11:34:34.694105] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
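restore.sh has now parsed its options (-c 0000:00:10.0 selects the NV-cache device, the positional 0000:00:11.0 is the base device), set timeout=240, installed the restore_kill cleanup trap, launched build/bin/spdk_tgt in the background as pid 74328, and is waiting for the target's RPC socket to come up; the "Starting SPDK v25.01-pre" banner that follows is the target's own startup output. A bare-bones equivalent of that launch-and-wait step (the waitforlisten helper in autotest_common.sh is more thorough; rpc_get_methods is used here only as a cheap readiness probe):

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" &                                    # start the SPDK target in the background
    svcpid=$!
    # poll the default RPC socket until the target answers
    until "$rpc" -s /var/tmp/spdk.sock -t 1 rpc_get_methods &> /dev/null; do
        kill -0 "$svcpid" || { echo "spdk_tgt exited before listening" >&2; exit 1; }
        sleep 0.5
    done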
00:18:35.617 [2024-11-05 11:34:34.694281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74328 ] 00:18:35.617 [2024-11-05 11:34:34.858767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.878 [2024-11-05 11:34:34.979035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.450 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:36.450 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:36.450 11:34:35 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:36.712 11:34:35 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:36.712 11:34:35 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:36.712 11:34:35 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:36.712 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:36.712 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:36.712 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:36.712 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:36.712 11:34:35 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:36.973 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:36.973 { 00:18:36.973 "name": "nvme0n1", 00:18:36.973 "aliases": [ 00:18:36.973 "dfedba49-5bd9-45c3-b93e-9e3e3892f858" 00:18:36.973 ], 00:18:36.973 "product_name": "NVMe disk", 00:18:36.973 "block_size": 4096, 00:18:36.973 "num_blocks": 1310720, 00:18:36.973 "uuid": "dfedba49-5bd9-45c3-b93e-9e3e3892f858", 00:18:36.973 "numa_id": -1, 00:18:36.973 "assigned_rate_limits": { 00:18:36.973 "rw_ios_per_sec": 0, 00:18:36.973 "rw_mbytes_per_sec": 0, 00:18:36.973 "r_mbytes_per_sec": 0, 00:18:36.973 "w_mbytes_per_sec": 0 00:18:36.973 }, 00:18:36.973 "claimed": true, 00:18:36.973 "claim_type": "read_many_write_one", 00:18:36.973 "zoned": false, 00:18:36.973 "supported_io_types": { 00:18:36.973 "read": true, 00:18:36.973 "write": true, 00:18:36.973 "unmap": true, 00:18:36.973 "flush": true, 00:18:36.973 "reset": true, 00:18:36.973 "nvme_admin": true, 00:18:36.973 "nvme_io": true, 00:18:36.973 "nvme_io_md": false, 00:18:36.973 "write_zeroes": true, 00:18:36.973 "zcopy": false, 00:18:36.973 "get_zone_info": false, 00:18:36.973 "zone_management": false, 00:18:36.973 "zone_append": false, 00:18:36.973 "compare": true, 00:18:36.973 "compare_and_write": false, 00:18:36.973 "abort": true, 00:18:36.973 "seek_hole": false, 00:18:36.973 "seek_data": false, 00:18:36.973 "copy": true, 00:18:36.973 "nvme_iov_md": false 00:18:36.973 }, 00:18:36.973 "driver_specific": { 00:18:36.973 "nvme": [ 
00:18:36.973 { 00:18:36.973 "pci_address": "0000:00:11.0", 00:18:36.973 "trid": { 00:18:36.973 "trtype": "PCIe", 00:18:36.973 "traddr": "0000:00:11.0" 00:18:36.973 }, 00:18:36.973 "ctrlr_data": { 00:18:36.973 "cntlid": 0, 00:18:36.973 "vendor_id": "0x1b36", 00:18:36.973 "model_number": "QEMU NVMe Ctrl", 00:18:36.973 "serial_number": "12341", 00:18:36.973 "firmware_revision": "8.0.0", 00:18:36.973 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:36.973 "oacs": { 00:18:36.973 "security": 0, 00:18:36.973 "format": 1, 00:18:36.973 "firmware": 0, 00:18:36.973 "ns_manage": 1 00:18:36.973 }, 00:18:36.973 "multi_ctrlr": false, 00:18:36.973 "ana_reporting": false 00:18:36.973 }, 00:18:36.973 "vs": { 00:18:36.973 "nvme_version": "1.4" 00:18:36.973 }, 00:18:36.973 "ns_data": { 00:18:36.973 "id": 1, 00:18:36.973 "can_share": false 00:18:36.973 } 00:18:36.973 } 00:18:36.973 ], 00:18:36.973 "mp_policy": "active_passive" 00:18:36.973 } 00:18:36.973 } 00:18:36.973 ]' 00:18:36.973 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:36.973 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:36.973 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:37.236 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:37.236 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:37.236 11:34:36 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=48101ea4-76df-4266-9744-b003667d64fc 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:37.236 11:34:36 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 48101ea4-76df-4266-9744-b003667d64fc 00:18:37.497 11:34:36 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:37.759 11:34:36 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a91abc81-b126-4a88-8d9f-e09ab1598e55 00:18:37.759 11:34:36 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a91abc81-b126-4a88-8d9f-e09ab1598e55 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:38.021 11:34:37 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.021 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.021 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:38.021 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:38.021 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:38.021 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.281 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:38.281 { 00:18:38.281 "name": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:38.281 "aliases": [ 00:18:38.281 "lvs/nvme0n1p0" 00:18:38.281 ], 00:18:38.281 "product_name": "Logical Volume", 00:18:38.281 "block_size": 4096, 00:18:38.281 "num_blocks": 26476544, 00:18:38.281 "uuid": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:38.281 "assigned_rate_limits": { 00:18:38.281 "rw_ios_per_sec": 0, 00:18:38.282 "rw_mbytes_per_sec": 0, 00:18:38.282 "r_mbytes_per_sec": 0, 00:18:38.282 "w_mbytes_per_sec": 0 00:18:38.282 }, 00:18:38.282 "claimed": false, 00:18:38.282 "zoned": false, 00:18:38.282 "supported_io_types": { 00:18:38.282 "read": true, 00:18:38.282 "write": true, 00:18:38.282 "unmap": true, 00:18:38.282 "flush": false, 00:18:38.282 "reset": true, 00:18:38.282 "nvme_admin": false, 00:18:38.282 "nvme_io": false, 00:18:38.282 "nvme_io_md": false, 00:18:38.282 "write_zeroes": true, 00:18:38.282 "zcopy": false, 00:18:38.282 "get_zone_info": false, 00:18:38.282 "zone_management": false, 00:18:38.282 "zone_append": false, 00:18:38.282 "compare": false, 00:18:38.282 "compare_and_write": false, 00:18:38.282 "abort": false, 00:18:38.282 "seek_hole": true, 00:18:38.282 "seek_data": true, 00:18:38.282 "copy": false, 00:18:38.282 "nvme_iov_md": false 00:18:38.282 }, 00:18:38.282 "driver_specific": { 00:18:38.282 "lvol": { 00:18:38.282 "lvol_store_uuid": "a91abc81-b126-4a88-8d9f-e09ab1598e55", 00:18:38.282 "base_bdev": "nvme0n1", 00:18:38.282 "thin_provision": true, 00:18:38.282 "num_allocated_clusters": 0, 00:18:38.282 "snapshot": false, 00:18:38.282 "clone": false, 00:18:38.282 "esnap_clone": false 00:18:38.282 } 00:18:38.282 } 00:18:38.282 } 00:18:38.282 ]' 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:38.282 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:38.282 11:34:37 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:38.282 11:34:37 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:38.282 11:34:37 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:38.543 11:34:37 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:38.543 11:34:37 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:38.543 11:34:37 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.543 11:34:37 
ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.543 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:38.543 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:18:38.543 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:38.543 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:38.842 { 00:18:38.842 "name": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:38.842 "aliases": [ 00:18:38.842 "lvs/nvme0n1p0" 00:18:38.842 ], 00:18:38.842 "product_name": "Logical Volume", 00:18:38.842 "block_size": 4096, 00:18:38.842 "num_blocks": 26476544, 00:18:38.842 "uuid": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:38.842 "assigned_rate_limits": { 00:18:38.842 "rw_ios_per_sec": 0, 00:18:38.842 "rw_mbytes_per_sec": 0, 00:18:38.842 "r_mbytes_per_sec": 0, 00:18:38.842 "w_mbytes_per_sec": 0 00:18:38.842 }, 00:18:38.842 "claimed": false, 00:18:38.842 "zoned": false, 00:18:38.842 "supported_io_types": { 00:18:38.842 "read": true, 00:18:38.842 "write": true, 00:18:38.842 "unmap": true, 00:18:38.842 "flush": false, 00:18:38.842 "reset": true, 00:18:38.842 "nvme_admin": false, 00:18:38.842 "nvme_io": false, 00:18:38.842 "nvme_io_md": false, 00:18:38.842 "write_zeroes": true, 00:18:38.842 "zcopy": false, 00:18:38.842 "get_zone_info": false, 00:18:38.842 "zone_management": false, 00:18:38.842 "zone_append": false, 00:18:38.842 "compare": false, 00:18:38.842 "compare_and_write": false, 00:18:38.842 "abort": false, 00:18:38.842 "seek_hole": true, 00:18:38.842 "seek_data": true, 00:18:38.842 "copy": false, 00:18:38.842 "nvme_iov_md": false 00:18:38.842 }, 00:18:38.842 "driver_specific": { 00:18:38.842 "lvol": { 00:18:38.842 "lvol_store_uuid": "a91abc81-b126-4a88-8d9f-e09ab1598e55", 00:18:38.842 "base_bdev": "nvme0n1", 00:18:38.842 "thin_provision": true, 00:18:38.842 "num_allocated_clusters": 0, 00:18:38.842 "snapshot": false, 00:18:38.842 "clone": false, 00:18:38.842 "esnap_clone": false 00:18:38.842 } 00:18:38.842 } 00:18:38.842 } 00:18:38.842 ]' 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:38.842 11:34:37 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:38.842 11:34:37 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:38.842 11:34:37 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:39.103 11:34:38 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:39.103 11:34:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:39.103 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:39.103 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:39.103 11:34:38 ftl.ftl_restore -- 
common/autotest_common.sh@1382 -- # local bs 00:18:39.103 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:18:39.103 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 00:18:39.103 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:39.103 { 00:18:39.103 "name": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:39.103 "aliases": [ 00:18:39.103 "lvs/nvme0n1p0" 00:18:39.103 ], 00:18:39.103 "product_name": "Logical Volume", 00:18:39.103 "block_size": 4096, 00:18:39.103 "num_blocks": 26476544, 00:18:39.104 "uuid": "54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4", 00:18:39.104 "assigned_rate_limits": { 00:18:39.104 "rw_ios_per_sec": 0, 00:18:39.104 "rw_mbytes_per_sec": 0, 00:18:39.104 "r_mbytes_per_sec": 0, 00:18:39.104 "w_mbytes_per_sec": 0 00:18:39.104 }, 00:18:39.104 "claimed": false, 00:18:39.104 "zoned": false, 00:18:39.104 "supported_io_types": { 00:18:39.104 "read": true, 00:18:39.104 "write": true, 00:18:39.104 "unmap": true, 00:18:39.104 "flush": false, 00:18:39.104 "reset": true, 00:18:39.104 "nvme_admin": false, 00:18:39.104 "nvme_io": false, 00:18:39.104 "nvme_io_md": false, 00:18:39.104 "write_zeroes": true, 00:18:39.104 "zcopy": false, 00:18:39.104 "get_zone_info": false, 00:18:39.104 "zone_management": false, 00:18:39.104 "zone_append": false, 00:18:39.104 "compare": false, 00:18:39.104 "compare_and_write": false, 00:18:39.104 "abort": false, 00:18:39.104 "seek_hole": true, 00:18:39.104 "seek_data": true, 00:18:39.104 "copy": false, 00:18:39.104 "nvme_iov_md": false 00:18:39.104 }, 00:18:39.104 "driver_specific": { 00:18:39.104 "lvol": { 00:18:39.104 "lvol_store_uuid": "a91abc81-b126-4a88-8d9f-e09ab1598e55", 00:18:39.104 "base_bdev": "nvme0n1", 00:18:39.104 "thin_provision": true, 00:18:39.104 "num_allocated_clusters": 0, 00:18:39.104 "snapshot": false, 00:18:39.104 "clone": false, 00:18:39.104 "esnap_clone": false 00:18:39.104 } 00:18:39.104 } 00:18:39.104 } 00:18:39.104 ]' 00:18:39.104 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:39.366 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:18:39.366 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:39.366 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:39.366 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:39.366 11:34:38 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 --l2p_dram_limit 10' 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:39.366 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:39.366 11:34:38 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 54dc0da2-8c24-4eb7-a629-b7ac1a9a2ba4 --l2p_dram_limit 10 -c nvc0n1p0 00:18:39.366 
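A minimal sketch (not part of the recorded run) of the arithmetic the get_bdev_size helper performs in the trace above: it reads block_size and num_blocks from bdev_get_bdevs via jq and converts the product to MiB, using the values printed here.
bs=4096                              # jq '.[] .block_size' above
nb=26476544                          # jq '.[] .num_blocks' above
echo $(( nb * bs / 1024 / 1024 ))    # prints 103424, matching bdev_size=103424 above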
[2024-11-05 11:34:38.624056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.366 [2024-11-05 11:34:38.624095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:39.366 [2024-11-05 11:34:38.624108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:39.366 [2024-11-05 11:34:38.624115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.366 [2024-11-05 11:34:38.624165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.366 [2024-11-05 11:34:38.624173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:39.366 [2024-11-05 11:34:38.624181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:18:39.366 [2024-11-05 11:34:38.624187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.366 [2024-11-05 11:34:38.624206] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:39.366 [2024-11-05 11:34:38.624825] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:39.366 [2024-11-05 11:34:38.624852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.366 [2024-11-05 11:34:38.624858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:39.366 [2024-11-05 11:34:38.624866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.650 ms 00:18:39.366 [2024-11-05 11:34:38.624872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.366 [2024-11-05 11:34:38.624928] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:18:39.366 [2024-11-05 11:34:38.625847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.366 [2024-11-05 11:34:38.625875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:39.366 [2024-11-05 11:34:38.625883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:18:39.367 [2024-11-05 11:34:38.625892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.630576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.630604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:39.367 [2024-11-05 11:34:38.630612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:18:39.367 [2024-11-05 11:34:38.630621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.630687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.630696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:39.367 [2024-11-05 11:34:38.630703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:39.367 [2024-11-05 11:34:38.630712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.630743] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.630751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:39.367 [2024-11-05 11:34:38.630758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:39.367 [2024-11-05 11:34:38.630765] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.630783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:39.367 [2024-11-05 11:34:38.633668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.633693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:39.367 [2024-11-05 11:34:38.633702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.889 ms 00:18:39.367 [2024-11-05 11:34:38.633711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.633739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.633745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:39.367 [2024-11-05 11:34:38.633753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:39.367 [2024-11-05 11:34:38.633758] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.633778] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:39.367 [2024-11-05 11:34:38.633891] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:39.367 [2024-11-05 11:34:38.633904] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:39.367 [2024-11-05 11:34:38.633912] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:39.367 [2024-11-05 11:34:38.633922] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:39.367 [2024-11-05 11:34:38.633928] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:39.367 [2024-11-05 11:34:38.633936] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:39.367 [2024-11-05 11:34:38.633942] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:39.367 [2024-11-05 11:34:38.633949] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:39.367 [2024-11-05 11:34:38.633954] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:39.367 [2024-11-05 11:34:38.633962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.633968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:39.367 [2024-11-05 11:34:38.633975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.186 ms 00:18:39.367 [2024-11-05 11:34:38.633986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.634051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.367 [2024-11-05 11:34:38.634057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:39.367 [2024-11-05 11:34:38.634064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:18:39.367 [2024-11-05 11:34:38.634069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.367 [2024-11-05 11:34:38.634147] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:39.367 [2024-11-05 11:34:38.634156] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:39.367 [2024-11-05 11:34:38.634164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634169] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:39.367 [2024-11-05 11:34:38.634182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:39.367 [2024-11-05 11:34:38.634199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634204] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.367 [2024-11-05 11:34:38.634211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:39.367 [2024-11-05 11:34:38.634215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:39.367 [2024-11-05 11:34:38.634222] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:39.367 [2024-11-05 11:34:38.634227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:39.367 [2024-11-05 11:34:38.634233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:39.367 [2024-11-05 11:34:38.634238] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:39.367 [2024-11-05 11:34:38.634251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:39.367 [2024-11-05 11:34:38.634270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:39.367 [2024-11-05 11:34:38.634285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634296] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:39.367 [2024-11-05 11:34:38.634302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:39.367 [2024-11-05 11:34:38.634318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:39.367 [2024-11-05 11:34:38.634339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634344] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.367 [2024-11-05 11:34:38.634351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:39.367 [2024-11-05 11:34:38.634355] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:39.367 [2024-11-05 11:34:38.634361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:39.367 [2024-11-05 11:34:38.634367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:39.367 [2024-11-05 11:34:38.634373] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:39.367 [2024-11-05 11:34:38.634378] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:39.367 [2024-11-05 11:34:38.634389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:39.367 [2024-11-05 11:34:38.634396] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634400] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:39.367 [2024-11-05 11:34:38.634407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:39.367 [2024-11-05 11:34:38.634412] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:39.367 [2024-11-05 11:34:38.634424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:39.367 [2024-11-05 11:34:38.634433] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:39.367 [2024-11-05 11:34:38.634438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:39.367 [2024-11-05 11:34:38.634444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:39.367 [2024-11-05 11:34:38.634449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:39.367 [2024-11-05 11:34:38.634455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:39.367 [2024-11-05 11:34:38.634462] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:39.367 [2024-11-05 11:34:38.634473] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.367 [2024-11-05 11:34:38.634480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:39.367 [2024-11-05 11:34:38.634487] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:39.367 [2024-11-05 11:34:38.634492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:39.367 [2024-11-05 11:34:38.634499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:39.367 [2024-11-05 11:34:38.634504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:39.367 [2024-11-05 11:34:38.634510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:39.367 [2024-11-05 11:34:38.634515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:39.367 [2024-11-05 11:34:38.634523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:39.368 [2024-11-05 11:34:38.634529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:39.368 [2024-11-05 11:34:38.634538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:39.368 [2024-11-05 11:34:38.634576] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:39.368 [2024-11-05 11:34:38.634585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:39.368 [2024-11-05 11:34:38.634600] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:39.368 [2024-11-05 11:34:38.634605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:39.368 [2024-11-05 11:34:38.634612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:39.368 [2024-11-05 11:34:38.634618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:39.368 [2024-11-05 11:34:38.634625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:39.368 [2024-11-05 11:34:38.634630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:18:39.368 [2024-11-05 11:34:38.634637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:39.368 [2024-11-05 11:34:38.634676] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
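A quick sanity check of the superblock metadata layout dump above (a sketch, assuming the 4096-byte block size reported for the underlying bdev): the hex blk_sz values are counts of 4 KiB blocks and convert to the MiB figures shown in the region dump, e.g. the l2p region (type:0x2, blk_sz:0x5000) and one P2L checkpoint region (blk_sz:0x800).
printf '%d MiB\n' $(( 0x5000 * 4096 / 1024 / 1024 ))   # 80  -> "Region l2p ... blocks: 80.00 MiB"
printf '%d MiB\n' $(( 0x800  * 4096 / 1024 / 1024 ))   # 8   -> "Region p2l0 ... blocks: 8.00 MiB"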
00:18:39.368 [2024-11-05 11:34:38.634687] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:43.574 [2024-11-05 11:34:42.434876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.434985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:43.574 [2024-11-05 11:34:42.435004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3800.183 ms 00:18:43.574 [2024-11-05 11:34:42.435016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.467398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.467471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.574 [2024-11-05 11:34:42.467486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.129 ms 00:18:43.574 [2024-11-05 11:34:42.467497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.467641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.467656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:43.574 [2024-11-05 11:34:42.467666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:43.574 [2024-11-05 11:34:42.467680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.503822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.503872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.574 [2024-11-05 11:34:42.503884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.105 ms 00:18:43.574 [2024-11-05 11:34:42.503896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.503934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.503947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.574 [2024-11-05 11:34:42.503957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:43.574 [2024-11-05 11:34:42.503970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.504577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.504605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.574 [2024-11-05 11:34:42.504616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:18:43.574 [2024-11-05 11:34:42.504626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.504747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.504758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.574 [2024-11-05 11:34:42.504766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:18:43.574 [2024-11-05 11:34:42.504779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.522689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.522738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.574 [2024-11-05 
11:34:42.522751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.888 ms 00:18:43.574 [2024-11-05 11:34:42.522764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.536119] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:43.574 [2024-11-05 11:34:42.540053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.540091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:43.574 [2024-11-05 11:34:42.540105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.160 ms 00:18:43.574 [2024-11-05 11:34:42.540114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.654599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.654677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:43.574 [2024-11-05 11:34:42.654698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 114.441 ms 00:18:43.574 [2024-11-05 11:34:42.654708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.654956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.654970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:43.574 [2024-11-05 11:34:42.654986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:18:43.574 [2024-11-05 11:34:42.654999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.681500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.681554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:43.574 [2024-11-05 11:34:42.681571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.434 ms 00:18:43.574 [2024-11-05 11:34:42.681579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.707500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.707554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:43.574 [2024-11-05 11:34:42.707570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.858 ms 00:18:43.574 [2024-11-05 11:34:42.707578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.708251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.708274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:43.574 [2024-11-05 11:34:42.708286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.619 ms 00:18:43.574 [2024-11-05 11:34:42.708295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 11:34:42.796195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.796259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:43.574 [2024-11-05 11:34:42.796280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.846 ms 00:18:43.574 [2024-11-05 11:34:42.796289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.574 [2024-11-05 
11:34:42.824559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.574 [2024-11-05 11:34:42.824616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:43.574 [2024-11-05 11:34:42.824636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.157 ms 00:18:43.574 [2024-11-05 11:34:42.824644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.835 [2024-11-05 11:34:42.851756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.835 [2024-11-05 11:34:42.851828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:43.835 [2024-11-05 11:34:42.851844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.051 ms 00:18:43.835 [2024-11-05 11:34:42.851852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.835 [2024-11-05 11:34:42.879035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.835 [2024-11-05 11:34:42.879091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:43.835 [2024-11-05 11:34:42.879107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.122 ms 00:18:43.835 [2024-11-05 11:34:42.879115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.835 [2024-11-05 11:34:42.879175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.835 [2024-11-05 11:34:42.879185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:43.835 [2024-11-05 11:34:42.879199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:43.835 [2024-11-05 11:34:42.879208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.835 [2024-11-05 11:34:42.879308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.835 [2024-11-05 11:34:42.879319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:43.835 [2024-11-05 11:34:42.879330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:18:43.835 [2024-11-05 11:34:42.879337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.835 [2024-11-05 11:34:42.880645] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4256.077 ms, result 0 00:18:43.835 { 00:18:43.835 "name": "ftl0", 00:18:43.835 "uuid": "7108cdfb-1a51-4b6f-821b-c2680d6e4cf0" 00:18:43.835 } 00:18:43.835 11:34:42 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:43.835 11:34:42 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:44.095 11:34:43 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:44.095 11:34:43 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:44.095 [2024-11-05 11:34:43.307908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.307980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:44.095 [2024-11-05 11:34:43.307997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:44.095 [2024-11-05 11:34:43.308018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.308044] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
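restore.sh@61-63 above wrap the save_subsystem_config -n bdev output in a {"subsystems": [...]} envelope before the device is unloaded at restore.sh@65. A minimal sketch of that wrapping step; the output path is an assumption taken from the --json argument of the spdk_dd invocation later in this log, not something shown at this point in the trace.
{
  echo '{"subsystems": ['
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
  echo ']}'
} > /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed destination; see --json=.../ftl/config/ftl.json below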
00:18:44.095 [2024-11-05 11:34:43.311194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.311243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:44.095 [2024-11-05 11:34:43.311261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.124 ms 00:18:44.095 [2024-11-05 11:34:43.311271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.311570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.311591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:44.095 [2024-11-05 11:34:43.311603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.264 ms 00:18:44.095 [2024-11-05 11:34:43.311616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.314893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.314921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:44.095 [2024-11-05 11:34:43.314933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.257 ms 00:18:44.095 [2024-11-05 11:34:43.314941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.321245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.321293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:44.095 [2024-11-05 11:34:43.321308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:18:44.095 [2024-11-05 11:34:43.321316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.348298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.348356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:44.095 [2024-11-05 11:34:43.348372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.900 ms 00:18:44.095 [2024-11-05 11:34:43.348380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.367176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.367235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:44.095 [2024-11-05 11:34:43.367250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.735 ms 00:18:44.095 [2024-11-05 11:34:43.367258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.095 [2024-11-05 11:34:43.367440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.095 [2024-11-05 11:34:43.367454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:44.095 [2024-11-05 11:34:43.367466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:44.095 [2024-11-05 11:34:43.367475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.366 [2024-11-05 11:34:43.394111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.366 [2024-11-05 11:34:43.394166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:44.366 [2024-11-05 11:34:43.394181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.614 ms 00:18:44.366 [2024-11-05 11:34:43.394189] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.366 [2024-11-05 11:34:43.419921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.366 [2024-11-05 11:34:43.419975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:44.366 [2024-11-05 11:34:43.419991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.675 ms 00:18:44.366 [2024-11-05 11:34:43.419998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.366 [2024-11-05 11:34:43.445787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.366 [2024-11-05 11:34:43.445849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:44.366 [2024-11-05 11:34:43.445865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.731 ms 00:18:44.366 [2024-11-05 11:34:43.445872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.366 [2024-11-05 11:34:43.471717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.366 [2024-11-05 11:34:43.471771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:44.366 [2024-11-05 11:34:43.471787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.724 ms 00:18:44.366 [2024-11-05 11:34:43.471794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.366 [2024-11-05 11:34:43.471858] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:44.366 [2024-11-05 11:34:43.471876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.471993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 
11:34:43.472010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:44.366 [2024-11-05 11:34:43.472212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:18:44.367 [2024-11-05 11:34:43.472231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:44.367 [2024-11-05 11:34:43.472996] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:44.367 [2024-11-05 11:34:43.473007] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:18:44.367 [2024-11-05 11:34:43.473015] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:44.367 [2024-11-05 11:34:43.473030] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:44.367 [2024-11-05 11:34:43.473037] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:44.367 [2024-11-05 11:34:43.473048] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:44.367 [2024-11-05 11:34:43.473058] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:44.367 [2024-11-05 11:34:43.473068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:44.367 [2024-11-05 11:34:43.473076] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:44.367 [2024-11-05 11:34:43.473084] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:44.367 [2024-11-05 11:34:43.473091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:44.367 [2024-11-05 11:34:43.473100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.367 [2024-11-05 11:34:43.473108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:44.367 [2024-11-05 11:34:43.473119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.245 ms 00:18:44.367 [2024-11-05 11:34:43.473126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.486766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.367 [2024-11-05 11:34:43.486834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
00:18:44.367 [2024-11-05 11:34:43.486848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.578 ms 00:18:44.367 [2024-11-05 11:34:43.486856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.487254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:44.367 [2024-11-05 11:34:43.487272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:44.367 [2024-11-05 11:34:43.487284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:18:44.367 [2024-11-05 11:34:43.487292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.534040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.367 [2024-11-05 11:34:43.534096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:44.367 [2024-11-05 11:34:43.534110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.367 [2024-11-05 11:34:43.534119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.534190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.367 [2024-11-05 11:34:43.534199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:44.367 [2024-11-05 11:34:43.534209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.367 [2024-11-05 11:34:43.534218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.534315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.367 [2024-11-05 11:34:43.534326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:44.367 [2024-11-05 11:34:43.534337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.367 [2024-11-05 11:34:43.534345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.367 [2024-11-05 11:34:43.534368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.368 [2024-11-05 11:34:43.534377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:44.368 [2024-11-05 11:34:43.534387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.368 [2024-11-05 11:34:43.534396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.368 [2024-11-05 11:34:43.620714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.368 [2024-11-05 11:34:43.620775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:44.368 [2024-11-05 11:34:43.620791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.368 [2024-11-05 11:34:43.620814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.691780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.691855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:44.630 [2024-11-05 11:34:43.691871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.691880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.691996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692010] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:44.630 [2024-11-05 11:34:43.692021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:44.630 [2024-11-05 11:34:43.692103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:44.630 [2024-11-05 11:34:43.692236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:44.630 [2024-11-05 11:34:43.692301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:44.630 [2024-11-05 11:34:43.692374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:44.630 [2024-11-05 11:34:43.692447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:44.630 [2024-11-05 11:34:43.692457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:44.630 [2024-11-05 11:34:43.692465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:44.630 [2024-11-05 11:34:43.692615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 384.666 ms, result 0 00:18:44.630 true 00:18:44.630 11:34:43 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74328 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74328 ']' 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74328 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74328 00:18:44.630 killing process with pid 74328 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo 
']' 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74328' 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 74328 00:18:44.630 11:34:43 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 74328 00:18:47.174 11:34:46 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:18:51.383 262144+0 records in 00:18:51.383 262144+0 records out 00:18:51.383 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.08126 s, 263 MB/s 00:18:51.383 11:34:50 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:18:53.926 11:34:52 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:53.926 [2024-11-05 11:34:52.660320] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:18:53.926 [2024-11-05 11:34:52.660469] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74564 ] 00:18:53.926 [2024-11-05 11:34:52.824743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.926 [2024-11-05 11:34:52.942284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.190 [2024-11-05 11:34:53.229658] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.190 [2024-11-05 11:34:53.229746] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:54.190 [2024-11-05 11:34:53.390305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.390367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:54.190 [2024-11-05 11:34:53.390386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:18:54.190 [2024-11-05 11:34:53.390394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.390451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.390461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:54.190 [2024-11-05 11:34:53.390473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:54.190 [2024-11-05 11:34:53.390481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.390502] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:54.190 [2024-11-05 11:34:53.391392] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:54.190 [2024-11-05 11:34:53.391438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.391447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:54.190 [2024-11-05 11:34:53.391457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.940 ms 00:18:54.190 [2024-11-05 11:34:53.391465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.393114] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:54.190 [2024-11-05 11:34:53.407186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.407234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:54.190 [2024-11-05 11:34:53.407248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.073 ms 00:18:54.190 [2024-11-05 11:34:53.407256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.407334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.407346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:54.190 [2024-11-05 11:34:53.407356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:18:54.190 [2024-11-05 11:34:53.407364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.415448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.415491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:54.190 [2024-11-05 11:34:53.415503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.002 ms 00:18:54.190 [2024-11-05 11:34:53.415511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.415596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.415605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:54.190 [2024-11-05 11:34:53.415614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:54.190 [2024-11-05 11:34:53.415622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.415666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.415676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:54.190 [2024-11-05 11:34:53.415684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:18:54.190 [2024-11-05 11:34:53.415692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.415716] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:54.190 [2024-11-05 11:34:53.419689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.419734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:54.190 [2024-11-05 11:34:53.419744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.978 ms 00:18:54.190 [2024-11-05 11:34:53.419755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.419791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.190 [2024-11-05 11:34:53.419812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:54.190 [2024-11-05 11:34:53.419821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:54.190 [2024-11-05 11:34:53.419829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.190 [2024-11-05 11:34:53.419881] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:54.190 [2024-11-05 11:34:53.419904] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:54.190 [2024-11-05 11:34:53.419942] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:54.190 [2024-11-05 11:34:53.419962] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:54.190 [2024-11-05 11:34:53.420067] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:54.191 [2024-11-05 11:34:53.420078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:54.191 [2024-11-05 11:34:53.420089] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:54.191 [2024-11-05 11:34:53.420100] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420110] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420119] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:54.191 [2024-11-05 11:34:53.420127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:54.191 [2024-11-05 11:34:53.420136] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:54.191 [2024-11-05 11:34:53.420145] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:54.191 [2024-11-05 11:34:53.420157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.191 [2024-11-05 11:34:53.420165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:54.191 [2024-11-05 11:34:53.420173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:18:54.191 [2024-11-05 11:34:53.420181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.191 [2024-11-05 11:34:53.420270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.191 [2024-11-05 11:34:53.420280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:54.191 [2024-11-05 11:34:53.420294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:18:54.191 [2024-11-05 11:34:53.420301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.191 [2024-11-05 11:34:53.420404] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:54.191 [2024-11-05 11:34:53.420417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:54.191 [2024-11-05 11:34:53.420425] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420433] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:54.191 [2024-11-05 11:34:53.420448] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420455] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420462] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:54.191 [2024-11-05 11:34:53.420469] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:54.191 [2024-11-05 
11:34:53.420475] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.191 [2024-11-05 11:34:53.420483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:54.191 [2024-11-05 11:34:53.420489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:54.191 [2024-11-05 11:34:53.420496] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:54.191 [2024-11-05 11:34:53.420502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:54.191 [2024-11-05 11:34:53.420510] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:54.191 [2024-11-05 11:34:53.420525] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:54.191 [2024-11-05 11:34:53.420540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:54.191 [2024-11-05 11:34:53.420561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:54.191 [2024-11-05 11:34:53.420582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420589] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:54.191 [2024-11-05 11:34:53.420601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420614] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:54.191 [2024-11-05 11:34:53.420621] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:54.191 [2024-11-05 11:34:53.420641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.191 [2024-11-05 11:34:53.420654] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:54.191 [2024-11-05 11:34:53.420660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:54.191 [2024-11-05 11:34:53.420666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:54.191 [2024-11-05 11:34:53.420672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:54.191 [2024-11-05 11:34:53.420679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:54.191 [2024-11-05 11:34:53.420685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:18:54.191 [2024-11-05 11:34:53.420698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:54.191 [2024-11-05 11:34:53.420705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420711] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:54.191 [2024-11-05 11:34:53.420719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:54.191 [2024-11-05 11:34:53.420726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420734] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:54.191 [2024-11-05 11:34:53.420744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:54.191 [2024-11-05 11:34:53.420752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:54.191 [2024-11-05 11:34:53.420759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:54.191 [2024-11-05 11:34:53.420766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:54.191 [2024-11-05 11:34:53.420773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:54.191 [2024-11-05 11:34:53.420779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:54.191 [2024-11-05 11:34:53.420788] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:54.191 [2024-11-05 11:34:53.420811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420821] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:54.191 [2024-11-05 11:34:53.420829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:54.191 [2024-11-05 11:34:53.420837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:54.191 [2024-11-05 11:34:53.420845] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:54.191 [2024-11-05 11:34:53.420853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:54.191 [2024-11-05 11:34:53.420860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:54.191 [2024-11-05 11:34:53.420866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:54.191 [2024-11-05 11:34:53.420874] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:54.191 [2024-11-05 11:34:53.420881] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:54.191 [2024-11-05 11:34:53.420888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420916] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:54.191 [2024-11-05 11:34:53.420924] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:54.191 [2024-11-05 11:34:53.420933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420944] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:54.191 [2024-11-05 11:34:53.420951] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:54.191 [2024-11-05 11:34:53.420959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:54.191 [2024-11-05 11:34:53.420967] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:54.191 [2024-11-05 11:34:53.420975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.191 [2024-11-05 11:34:53.420983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:54.191 [2024-11-05 11:34:53.420991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.640 ms 00:18:54.191 [2024-11-05 11:34:53.420998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.191 [2024-11-05 11:34:53.452780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.191 [2024-11-05 11:34:53.452852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:54.191 [2024-11-05 11:34:53.452864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.735 ms 00:18:54.191 [2024-11-05 11:34:53.452873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.191 [2024-11-05 11:34:53.452964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.192 [2024-11-05 11:34:53.452978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:54.192 [2024-11-05 11:34:53.452987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:54.192 [2024-11-05 11:34:53.452994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.498462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.498516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:54.454 [2024-11-05 11:34:53.498529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.408 ms 00:18:54.454 [2024-11-05 11:34:53.498538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.498604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 
11:34:53.498614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:54.454 [2024-11-05 11:34:53.498624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:54.454 [2024-11-05 11:34:53.498636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.499248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.499290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:54.454 [2024-11-05 11:34:53.499302] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.533 ms 00:18:54.454 [2024-11-05 11:34:53.499309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.499462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.499472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:54.454 [2024-11-05 11:34:53.499481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:18:54.454 [2024-11-05 11:34:53.499489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.515346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.515391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:54.454 [2024-11-05 11:34:53.515403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.832 ms 00:18:54.454 [2024-11-05 11:34:53.515415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.529624] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:18:54.454 [2024-11-05 11:34:53.529683] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:54.454 [2024-11-05 11:34:53.529697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.529706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:54.454 [2024-11-05 11:34:53.529716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.171 ms 00:18:54.454 [2024-11-05 11:34:53.529724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.555372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.555423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:54.454 [2024-11-05 11:34:53.555443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.586 ms 00:18:54.454 [2024-11-05 11:34:53.555451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.568446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.568506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:54.454 [2024-11-05 11:34:53.568517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.939 ms 00:18:54.454 [2024-11-05 11:34:53.568525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.581133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.581196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:18:54.454 [2024-11-05 11:34:53.581208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.560 ms 00:18:54.454 [2024-11-05 11:34:53.581216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.581899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.581931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:54.454 [2024-11-05 11:34:53.581943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:18:54.454 [2024-11-05 11:34:53.581951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.645977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.646036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:54.454 [2024-11-05 11:34:53.646053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.005 ms 00:18:54.454 [2024-11-05 11:34:53.646062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.657240] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:54.454 [2024-11-05 11:34:53.660411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.660455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:54.454 [2024-11-05 11:34:53.660468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.283 ms 00:18:54.454 [2024-11-05 11:34:53.660477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.660571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.660582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:54.454 [2024-11-05 11:34:53.660593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:54.454 [2024-11-05 11:34:53.660601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.660674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.660689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:54.454 [2024-11-05 11:34:53.660700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:18:54.454 [2024-11-05 11:34:53.660708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.660729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.660746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:54.454 [2024-11-05 11:34:53.660756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:54.454 [2024-11-05 11:34:53.660765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.660816] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:54.454 [2024-11-05 11:34:53.660828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.660837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:54.454 [2024-11-05 11:34:53.660849] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:18:54.454 [2024-11-05 11:34:53.660858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.454 [2024-11-05 11:34:53.687023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.454 [2024-11-05 11:34:53.687087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:54.455 [2024-11-05 11:34:53.687102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.144 ms 00:18:54.455 [2024-11-05 11:34:53.687111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.455 [2024-11-05 11:34:53.687205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:54.455 [2024-11-05 11:34:53.687215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:54.455 [2024-11-05 11:34:53.687225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:18:54.455 [2024-11-05 11:34:53.687234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:54.455 [2024-11-05 11:34:53.688476] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 297.685 ms, result 0 00:18:55.879  [2024-11-05T11:34:55.722Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-05T11:34:57.110Z] Copying: 38/1024 [MB] (19 MBps) [2024-11-05T11:34:58.055Z] Copying: 57/1024 [MB] (19 MBps) [2024-11-05T11:34:58.999Z] Copying: 71/1024 [MB] (13 MBps) [2024-11-05T11:34:59.944Z] Copying: 82/1024 [MB] (10 MBps) [2024-11-05T11:35:00.888Z] Copying: 92/1024 [MB] (10 MBps) [2024-11-05T11:35:01.832Z] Copying: 102/1024 [MB] (10 MBps) [2024-11-05T11:35:02.799Z] Copying: 114/1024 [MB] (11 MBps) [2024-11-05T11:35:03.743Z] Copying: 167/1024 [MB] (52 MBps) [2024-11-05T11:35:05.130Z] Copying: 221/1024 [MB] (54 MBps) [2024-11-05T11:35:05.704Z] Copying: 259/1024 [MB] (37 MBps) [2024-11-05T11:35:07.094Z] Copying: 277/1024 [MB] (17 MBps) [2024-11-05T11:35:08.037Z] Copying: 287/1024 [MB] (10 MBps) [2024-11-05T11:35:08.981Z] Copying: 327/1024 [MB] (39 MBps) [2024-11-05T11:35:09.965Z] Copying: 350/1024 [MB] (23 MBps) [2024-11-05T11:35:10.908Z] Copying: 363/1024 [MB] (12 MBps) [2024-11-05T11:35:11.852Z] Copying: 379/1024 [MB] (16 MBps) [2024-11-05T11:35:12.797Z] Copying: 394/1024 [MB] (14 MBps) [2024-11-05T11:35:13.742Z] Copying: 410/1024 [MB] (16 MBps) [2024-11-05T11:35:15.131Z] Copying: 428/1024 [MB] (18 MBps) [2024-11-05T11:35:15.704Z] Copying: 449/1024 [MB] (20 MBps) [2024-11-05T11:35:17.125Z] Copying: 469/1024 [MB] (19 MBps) [2024-11-05T11:35:18.070Z] Copying: 490/1024 [MB] (21 MBps) [2024-11-05T11:35:19.016Z] Copying: 510/1024 [MB] (19 MBps) [2024-11-05T11:35:19.957Z] Copying: 530/1024 [MB] (19 MBps) [2024-11-05T11:35:20.904Z] Copying: 547/1024 [MB] (16 MBps) [2024-11-05T11:35:21.849Z] Copying: 570/1024 [MB] (23 MBps) [2024-11-05T11:35:22.795Z] Copying: 581/1024 [MB] (10 MBps) [2024-11-05T11:35:23.741Z] Copying: 591/1024 [MB] (10 MBps) [2024-11-05T11:35:25.125Z] Copying: 601/1024 [MB] (10 MBps) [2024-11-05T11:35:25.699Z] Copying: 612/1024 [MB] (10 MBps) [2024-11-05T11:35:27.087Z] Copying: 622/1024 [MB] (10 MBps) [2024-11-05T11:35:28.031Z] Copying: 632/1024 [MB] (10 MBps) [2024-11-05T11:35:28.975Z] Copying: 658/1024 [MB] (26 MBps) [2024-11-05T11:35:29.917Z] Copying: 685/1024 [MB] (26 MBps) [2024-11-05T11:35:30.861Z] Copying: 698/1024 [MB] (12 MBps) [2024-11-05T11:35:31.805Z] Copying: 712/1024 [MB] (14 MBps) [2024-11-05T11:35:32.748Z] Copying: 733/1024 [MB] (20 MBps) [2024-11-05T11:35:34.128Z] Copying: 
764/1024 [MB] (31 MBps) [2024-11-05T11:35:34.700Z] Copying: 795/1024 [MB] (30 MBps) [2024-11-05T11:35:36.084Z] Copying: 819/1024 [MB] (24 MBps) [2024-11-05T11:35:37.030Z] Copying: 841/1024 [MB] (22 MBps) [2024-11-05T11:35:37.981Z] Copying: 866/1024 [MB] (24 MBps) [2024-11-05T11:35:38.925Z] Copying: 886/1024 [MB] (19 MBps) [2024-11-05T11:35:39.864Z] Copying: 905/1024 [MB] (19 MBps) [2024-11-05T11:35:40.806Z] Copying: 927/1024 [MB] (22 MBps) [2024-11-05T11:35:41.748Z] Copying: 945/1024 [MB] (17 MBps) [2024-11-05T11:35:43.135Z] Copying: 964/1024 [MB] (19 MBps) [2024-11-05T11:35:43.707Z] Copying: 981/1024 [MB] (16 MBps) [2024-11-05T11:35:45.094Z] Copying: 996/1024 [MB] (14 MBps) [2024-11-05T11:35:46.037Z] Copying: 1030064/1048576 [kB] (9808 kBps) [2024-11-05T11:35:46.606Z] Copying: 1039812/1048576 [kB] (9748 kBps) [2024-11-05T11:35:46.606Z] Copying: 1024/1024 [MB] (average 19 MBps)[2024-11-05 11:35:46.555631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.332 [2024-11-05 11:35:46.555691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:47.332 [2024-11-05 11:35:46.555708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:47.332 [2024-11-05 11:35:46.555717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.332 [2024-11-05 11:35:46.555741] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:47.332 [2024-11-05 11:35:46.558880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.332 [2024-11-05 11:35:46.558930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:47.332 [2024-11-05 11:35:46.558944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.122 ms 00:19:47.332 [2024-11-05 11:35:46.558952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.332 [2024-11-05 11:35:46.562140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.332 [2024-11-05 11:35:46.562191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:47.332 [2024-11-05 11:35:46.562202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.150 ms 00:19:47.332 [2024-11-05 11:35:46.562211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.332 [2024-11-05 11:35:46.582489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.332 [2024-11-05 11:35:46.582547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:47.332 [2024-11-05 11:35:46.582559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.261 ms 00:19:47.332 [2024-11-05 11:35:46.582567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.332 [2024-11-05 11:35:46.588773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.332 [2024-11-05 11:35:46.588848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:47.332 [2024-11-05 11:35:46.588860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.162 ms 00:19:47.332 [2024-11-05 11:35:46.588867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.616606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.595 [2024-11-05 11:35:46.616662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:47.595 [2024-11-05 11:35:46.616675] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.677 ms 00:19:47.595 [2024-11-05 11:35:46.616682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.633064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.595 [2024-11-05 11:35:46.633121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:47.595 [2024-11-05 11:35:46.633135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.330 ms 00:19:47.595 [2024-11-05 11:35:46.633143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.633289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.595 [2024-11-05 11:35:46.633303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:47.595 [2024-11-05 11:35:46.633313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:19:47.595 [2024-11-05 11:35:46.633330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.659992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.595 [2024-11-05 11:35:46.660047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:47.595 [2024-11-05 11:35:46.660061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.647 ms 00:19:47.595 [2024-11-05 11:35:46.660069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.686362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.595 [2024-11-05 11:35:46.686417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:47.595 [2024-11-05 11:35:46.686444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.244 ms 00:19:47.595 [2024-11-05 11:35:46.686452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.595 [2024-11-05 11:35:46.711909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.596 [2024-11-05 11:35:46.711962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:47.596 [2024-11-05 11:35:46.711974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.406 ms 00:19:47.596 [2024-11-05 11:35:46.711981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.596 [2024-11-05 11:35:46.737146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.596 [2024-11-05 11:35:46.737200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:47.596 [2024-11-05 11:35:46.737212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.045 ms 00:19:47.596 [2024-11-05 11:35:46.737219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.596 [2024-11-05 11:35:46.737267] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:47.596 [2024-11-05 11:35:46.737283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 
0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737878] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:47.596 [2024-11-05 11:35:46.737950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.737999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 
11:35:46.738075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:47.597 [2024-11-05 11:35:46.738252] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:47.597 [2024-11-05 11:35:46.738266] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:19:47.597 [2024-11-05 11:35:46.738275] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:47.597 [2024-11-05 11:35:46.738286] ftl_debug.c: 
214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:47.597 [2024-11-05 11:35:46.738294] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:47.597 [2024-11-05 11:35:46.738303] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:47.597 [2024-11-05 11:35:46.738310] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:47.597 [2024-11-05 11:35:46.738318] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:47.597 [2024-11-05 11:35:46.738327] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:47.597 [2024-11-05 11:35:46.738342] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:47.597 [2024-11-05 11:35:46.738349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:47.597 [2024-11-05 11:35:46.738357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.597 [2024-11-05 11:35:46.738366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:47.597 [2024-11-05 11:35:46.738375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.091 ms 00:19:47.597 [2024-11-05 11:35:46.738382] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.597 [2024-11-05 11:35:46.751964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.597 [2024-11-05 11:35:46.752014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:47.597 [2024-11-05 11:35:46.752025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.560 ms 00:19:47.597 [2024-11-05 11:35:46.752033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.597 [2024-11-05 11:35:46.752438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:47.597 [2024-11-05 11:35:46.752456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:47.597 [2024-11-05 11:35:46.752466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:19:47.597 [2024-11-05 11:35:46.752474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.597 [2024-11-05 11:35:46.788920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.597 [2024-11-05 11:35:46.788978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:47.597 [2024-11-05 11:35:46.788991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.597 [2024-11-05 11:35:46.789001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.597 [2024-11-05 11:35:46.789074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.597 [2024-11-05 11:35:46.789084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:47.597 [2024-11-05 11:35:46.789094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.597 [2024-11-05 11:35:46.789104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.597 [2024-11-05 11:35:46.789185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.597 [2024-11-05 11:35:46.789197] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:47.597 [2024-11-05 11:35:46.789205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.597 [2024-11-05 11:35:46.789213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:47.597 [2024-11-05 11:35:46.789229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.597 [2024-11-05 11:35:46.789238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:47.597 [2024-11-05 11:35:46.789247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.597 [2024-11-05 11:35:46.789254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.859 [2024-11-05 11:35:46.873778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.859 [2024-11-05 11:35:46.873848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:47.859 [2024-11-05 11:35:46.873862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.873872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:47.860 [2024-11-05 11:35:46.944117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:47.860 [2024-11-05 11:35:46.944239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:47.860 [2024-11-05 11:35:46.944309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:47.860 [2024-11-05 11:35:46.944434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:47.860 [2024-11-05 11:35:46.944495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:47.860 [2024-11-05 11:35:46.944576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 
11:35:46.944584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:47.860 [2024-11-05 11:35:46.944651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:47.860 [2024-11-05 11:35:46.944661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:47.860 [2024-11-05 11:35:46.944669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:47.860 [2024-11-05 11:35:46.944831] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.133 ms, result 0 00:19:48.802 00:19:48.802 00:19:48.802 11:35:47 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:19:48.802 [2024-11-05 11:35:48.067280] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:19:48.802 [2024-11-05 11:35:48.067439] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75143 ] 00:19:49.063 [2024-11-05 11:35:48.221927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.324 [2024-11-05 11:35:48.343586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.587 [2024-11-05 11:35:48.633788] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.587 [2024-11-05 11:35:48.633894] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:49.587 [2024-11-05 11:35:48.796053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.796125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:49.587 [2024-11-05 11:35:48.796144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:49.587 [2024-11-05 11:35:48.796153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.796210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.796222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:49.587 [2024-11-05 11:35:48.796234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:49.587 [2024-11-05 11:35:48.796242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.796262] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:49.587 [2024-11-05 11:35:48.796991] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:49.587 [2024-11-05 11:35:48.797036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.797045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:49.587 [2024-11-05 11:35:48.797055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.779 ms 00:19:49.587 [2024-11-05 11:35:48.797063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 
11:35:48.799373] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:49.587 [2024-11-05 11:35:48.813868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.813942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:49.587 [2024-11-05 11:35:48.813958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.498 ms 00:19:49.587 [2024-11-05 11:35:48.813967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.814055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.814070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:49.587 [2024-11-05 11:35:48.814079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:49.587 [2024-11-05 11:35:48.814087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.822566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.822617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:49.587 [2024-11-05 11:35:48.822652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.395 ms 00:19:49.587 [2024-11-05 11:35:48.822661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.822752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.587 [2024-11-05 11:35:48.822761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:49.587 [2024-11-05 11:35:48.822770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:19:49.587 [2024-11-05 11:35:48.822778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.587 [2024-11-05 11:35:48.822845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.588 [2024-11-05 11:35:48.822857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:49.588 [2024-11-05 11:35:48.822866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:49.588 [2024-11-05 11:35:48.822873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.588 [2024-11-05 11:35:48.822899] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:49.588 [2024-11-05 11:35:48.827071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.588 [2024-11-05 11:35:48.827119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:49.588 [2024-11-05 11:35:48.827129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.180 ms 00:19:49.588 [2024-11-05 11:35:48.827141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.588 [2024-11-05 11:35:48.827177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.588 [2024-11-05 11:35:48.827186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:49.588 [2024-11-05 11:35:48.827195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:49.588 [2024-11-05 11:35:48.827203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.588 [2024-11-05 11:35:48.827257] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:49.588 [2024-11-05 
11:35:48.827280] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:49.588 [2024-11-05 11:35:48.827319] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:49.588 [2024-11-05 11:35:48.827339] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:49.588 [2024-11-05 11:35:48.827444] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:49.588 [2024-11-05 11:35:48.827456] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:49.588 [2024-11-05 11:35:48.827467] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:49.588 [2024-11-05 11:35:48.827479] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827489] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827498] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:49.588 [2024-11-05 11:35:48.827505] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:49.588 [2024-11-05 11:35:48.827513] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:49.588 [2024-11-05 11:35:48.827520] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:49.588 [2024-11-05 11:35:48.827531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.588 [2024-11-05 11:35:48.827539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:49.588 [2024-11-05 11:35:48.827547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:19:49.588 [2024-11-05 11:35:48.827555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.588 [2024-11-05 11:35:48.827639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.588 [2024-11-05 11:35:48.827649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:49.588 [2024-11-05 11:35:48.827656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:49.588 [2024-11-05 11:35:48.827663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.588 [2024-11-05 11:35:48.827768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:49.588 [2024-11-05 11:35:48.827782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:49.588 [2024-11-05 11:35:48.827790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827825] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:49.588 [2024-11-05 11:35:48.827834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:49.588 [2024-11-05 11:35:48.827856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 
80.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.588 [2024-11-05 11:35:48.827870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:49.588 [2024-11-05 11:35:48.827878] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:49.588 [2024-11-05 11:35:48.827886] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:49.588 [2024-11-05 11:35:48.827893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:49.588 [2024-11-05 11:35:48.827900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:49.588 [2024-11-05 11:35:48.827913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:49.588 [2024-11-05 11:35:48.827927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:49.588 [2024-11-05 11:35:48.827948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827961] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:49.588 [2024-11-05 11:35:48.827968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827976] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.588 [2024-11-05 11:35:48.827982] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:49.588 [2024-11-05 11:35:48.827989] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:49.588 [2024-11-05 11:35:48.827995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.588 [2024-11-05 11:35:48.828002] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:49.588 [2024-11-05 11:35:48.828009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:49.588 [2024-11-05 11:35:48.828017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:49.588 [2024-11-05 11:35:48.828023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:49.588 [2024-11-05 11:35:48.828038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:49.588 [2024-11-05 11:35:48.828046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.588 [2024-11-05 11:35:48.828052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:49.588 [2024-11-05 11:35:48.828059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:49.588 [2024-11-05 11:35:48.828065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:49.588 [2024-11-05 11:35:48.828072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:49.588 [2024-11-05 11:35:48.828079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:49.588 [2024-11-05 11:35:48.828086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.828092] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:49.588 [2024-11-05 11:35:48.828099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:49.588 [2024-11-05 11:35:48.828106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.828113] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:49.588 [2024-11-05 11:35:48.828121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:49.588 [2024-11-05 11:35:48.828129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:49.588 [2024-11-05 11:35:48.828136] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:49.588 [2024-11-05 11:35:48.828144] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:49.588 [2024-11-05 11:35:48.828151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:49.588 [2024-11-05 11:35:48.828158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:49.588 [2024-11-05 11:35:48.828166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:49.588 [2024-11-05 11:35:48.828172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:49.588 [2024-11-05 11:35:48.828178] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:49.588 [2024-11-05 11:35:48.828187] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:49.588 [2024-11-05 11:35:48.828196] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.588 [2024-11-05 11:35:48.828205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:49.588 [2024-11-05 11:35:48.828212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:49.588 [2024-11-05 11:35:48.828220] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:49.588 [2024-11-05 11:35:48.828227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:49.588 [2024-11-05 11:35:48.828235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:49.588 [2024-11-05 11:35:48.828242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:49.588 [2024-11-05 11:35:48.828249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:49.588 [2024-11-05 11:35:48.828256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:49.588 [2024-11-05 11:35:48.828263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:49.588 [2024-11-05 11:35:48.828270] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:49.588 [2024-11-05 11:35:48.828277] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:49.588 [2024-11-05 11:35:48.828284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:49.589 [2024-11-05 11:35:48.828292] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:49.589 [2024-11-05 11:35:48.828300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:49.589 [2024-11-05 11:35:48.828308] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:49.589 [2024-11-05 11:35:48.828318] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:49.589 [2024-11-05 11:35:48.828333] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:49.589 [2024-11-05 11:35:48.828342] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:49.589 [2024-11-05 11:35:48.828350] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:49.589 [2024-11-05 11:35:48.828357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:49.589 [2024-11-05 11:35:48.828367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.589 [2024-11-05 11:35:48.828375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:49.589 [2024-11-05 11:35:48.828383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:19:49.589 [2024-11-05 11:35:48.828391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.589 [2024-11-05 11:35:48.860751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.589 [2024-11-05 11:35:48.860826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:49.589 [2024-11-05 11:35:48.860840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.314 ms 00:19:49.589 [2024-11-05 11:35:48.860850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.589 [2024-11-05 11:35:48.860945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.589 [2024-11-05 11:35:48.860961] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:49.589 [2024-11-05 11:35:48.860971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:49.589 [2024-11-05 11:35:48.860979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.908285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.908347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:49.851 [2024-11-05 11:35:48.908361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.244 ms 00:19:49.851 [2024-11-05 11:35:48.908370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.908423] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.908433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:49.851 [2024-11-05 11:35:48.908443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:49.851 [2024-11-05 11:35:48.908455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.909125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.909169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:49.851 [2024-11-05 11:35:48.909180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:19:49.851 [2024-11-05 11:35:48.909189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.909347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.909368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:49.851 [2024-11-05 11:35:48.909378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:19:49.851 [2024-11-05 11:35:48.909386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.925346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.925399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:49.851 [2024-11-05 11:35:48.925411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.934 ms 00:19:49.851 [2024-11-05 11:35:48.925422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.940121] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:49.851 [2024-11-05 11:35:48.940178] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:49.851 [2024-11-05 11:35:48.940197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.940206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:49.851 [2024-11-05 11:35:48.940215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.658 ms 00:19:49.851 [2024-11-05 11:35:48.940223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.966713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.966780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:49.851 [2024-11-05 11:35:48.966793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.430 ms 00:19:49.851 [2024-11-05 11:35:48.966814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.980356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.980400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:49.851 [2024-11-05 11:35:48.980413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.505 ms 00:19:49.851 [2024-11-05 11:35:48.980421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.993407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 
11:35:48.993459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:49.851 [2024-11-05 11:35:48.993471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.926 ms 00:19:49.851 [2024-11-05 11:35:48.993479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:48.994180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:48.994212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:49.851 [2024-11-05 11:35:48.994223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.581 ms 00:19:49.851 [2024-11-05 11:35:48.994231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:49.062280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:49.062346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:49.851 [2024-11-05 11:35:49.062364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.024 ms 00:19:49.851 [2024-11-05 11:35:49.062381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:49.073712] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:49.851 [2024-11-05 11:35:49.077008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:49.077058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:49.851 [2024-11-05 11:35:49.077072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.560 ms 00:19:49.851 [2024-11-05 11:35:49.077081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:49.077178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:49.077190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:49.851 [2024-11-05 11:35:49.077200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:19:49.851 [2024-11-05 11:35:49.077211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:49.077288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:49.077299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:49.851 [2024-11-05 11:35:49.077308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:49.851 [2024-11-05 11:35:49.077317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.851 [2024-11-05 11:35:49.077339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.851 [2024-11-05 11:35:49.077348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:49.851 [2024-11-05 11:35:49.077356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:49.852 [2024-11-05 11:35:49.077364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-11-05 11:35:49.077401] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:49.852 [2024-11-05 11:35:49.077415] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-11-05 11:35:49.077424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:49.852 
[2024-11-05 11:35:49.077432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:49.852 [2024-11-05 11:35:49.077440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-11-05 11:35:49.103265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-11-05 11:35:49.103319] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:49.852 [2024-11-05 11:35:49.103334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.804 ms 00:19:49.852 [2024-11-05 11:35:49.103343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-11-05 11:35:49.103443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:49.852 [2024-11-05 11:35:49.103454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:49.852 [2024-11-05 11:35:49.103464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:19:49.852 [2024-11-05 11:35:49.103473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:49.852 [2024-11-05 11:35:49.104761] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 308.218 ms, result 0 00:19:51.239  [2024-11-05T11:35:51.456Z] Copying: 18/1024 [MB] (18 MBps) [2024-11-05T11:35:52.408Z] Copying: 33/1024 [MB] (14 MBps) [2024-11-05T11:35:53.353Z] Copying: 56/1024 [MB] (23 MBps) [2024-11-05T11:35:54.295Z] Copying: 72/1024 [MB] (15 MBps) [2024-11-05T11:35:55.684Z] Copying: 87/1024 [MB] (15 MBps) [2024-11-05T11:35:56.627Z] Copying: 103/1024 [MB] (15 MBps) [2024-11-05T11:35:57.569Z] Copying: 121/1024 [MB] (18 MBps) [2024-11-05T11:35:58.511Z] Copying: 131/1024 [MB] (10 MBps) [2024-11-05T11:35:59.491Z] Copying: 144/1024 [MB] (12 MBps) [2024-11-05T11:36:00.434Z] Copying: 155/1024 [MB] (10 MBps) [2024-11-05T11:36:01.379Z] Copying: 181/1024 [MB] (26 MBps) [2024-11-05T11:36:02.324Z] Copying: 207/1024 [MB] (25 MBps) [2024-11-05T11:36:03.714Z] Copying: 219/1024 [MB] (12 MBps) [2024-11-05T11:36:04.288Z] Copying: 237/1024 [MB] (18 MBps) [2024-11-05T11:36:05.675Z] Copying: 259/1024 [MB] (22 MBps) [2024-11-05T11:36:06.624Z] Copying: 280/1024 [MB] (20 MBps) [2024-11-05T11:36:07.568Z] Copying: 299/1024 [MB] (19 MBps) [2024-11-05T11:36:08.513Z] Copying: 319/1024 [MB] (19 MBps) [2024-11-05T11:36:09.458Z] Copying: 332/1024 [MB] (12 MBps) [2024-11-05T11:36:10.400Z] Copying: 346/1024 [MB] (14 MBps) [2024-11-05T11:36:11.333Z] Copying: 360/1024 [MB] (13 MBps) [2024-11-05T11:36:12.717Z] Copying: 380/1024 [MB] (19 MBps) [2024-11-05T11:36:13.290Z] Copying: 401/1024 [MB] (21 MBps) [2024-11-05T11:36:14.710Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-05T11:36:15.651Z] Copying: 425/1024 [MB] (12 MBps) [2024-11-05T11:36:16.590Z] Copying: 439/1024 [MB] (14 MBps) [2024-11-05T11:36:17.532Z] Copying: 458/1024 [MB] (18 MBps) [2024-11-05T11:36:18.475Z] Copying: 473/1024 [MB] (15 MBps) [2024-11-05T11:36:19.416Z] Copying: 489/1024 [MB] (15 MBps) [2024-11-05T11:36:20.363Z] Copying: 502/1024 [MB] (13 MBps) [2024-11-05T11:36:21.309Z] Copying: 516/1024 [MB] (14 MBps) [2024-11-05T11:36:22.697Z] Copying: 527/1024 [MB] (10 MBps) [2024-11-05T11:36:23.641Z] Copying: 540/1024 [MB] (13 MBps) [2024-11-05T11:36:24.583Z] Copying: 552/1024 [MB] (11 MBps) [2024-11-05T11:36:25.528Z] Copying: 562/1024 [MB] (10 MBps) [2024-11-05T11:36:26.469Z] Copying: 575/1024 [MB] (12 MBps) [2024-11-05T11:36:27.408Z] Copying: 588/1024 [MB] (13 MBps) [2024-11-05T11:36:28.389Z] 
Copying: 599/1024 [MB] (10 MBps) [2024-11-05T11:36:29.331Z] Copying: 610/1024 [MB] (10 MBps) [2024-11-05T11:36:30.711Z] Copying: 620/1024 [MB] (10 MBps) [2024-11-05T11:36:31.656Z] Copying: 631/1024 [MB] (10 MBps) [2024-11-05T11:36:32.604Z] Copying: 643/1024 [MB] (12 MBps) [2024-11-05T11:36:33.552Z] Copying: 674/1024 [MB] (31 MBps) [2024-11-05T11:36:34.500Z] Copying: 685/1024 [MB] (10 MBps) [2024-11-05T11:36:35.447Z] Copying: 696/1024 [MB] (10 MBps) [2024-11-05T11:36:36.392Z] Copying: 706/1024 [MB] (10 MBps) [2024-11-05T11:36:37.338Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-05T11:36:38.728Z] Copying: 728/1024 [MB] (11 MBps) [2024-11-05T11:36:39.302Z] Copying: 754/1024 [MB] (25 MBps) [2024-11-05T11:36:40.692Z] Copying: 771/1024 [MB] (17 MBps) [2024-11-05T11:36:41.632Z] Copying: 791/1024 [MB] (19 MBps) [2024-11-05T11:36:42.582Z] Copying: 814/1024 [MB] (23 MBps) [2024-11-05T11:36:43.524Z] Copying: 838/1024 [MB] (23 MBps) [2024-11-05T11:36:44.464Z] Copying: 860/1024 [MB] (22 MBps) [2024-11-05T11:36:45.407Z] Copying: 883/1024 [MB] (23 MBps) [2024-11-05T11:36:46.351Z] Copying: 899/1024 [MB] (15 MBps) [2024-11-05T11:36:47.299Z] Copying: 917/1024 [MB] (18 MBps) [2024-11-05T11:36:48.689Z] Copying: 940/1024 [MB] (23 MBps) [2024-11-05T11:36:49.629Z] Copying: 969/1024 [MB] (28 MBps) [2024-11-05T11:36:50.579Z] Copying: 990/1024 [MB] (20 MBps) [2024-11-05T11:36:50.837Z] Copying: 1014/1024 [MB] (24 MBps) [2024-11-05T11:36:51.096Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-11-05 11:36:50.899177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.899259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:51.822 [2024-11-05 11:36:50.899277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:51.822 [2024-11-05 11:36:50.899287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.899314] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:51.822 [2024-11-05 11:36:50.902310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.902342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:51.822 [2024-11-05 11:36:50.902355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.979 ms 00:20:51.822 [2024-11-05 11:36:50.902370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.902619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.902631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:51.822 [2024-11-05 11:36:50.902640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:20:51.822 [2024-11-05 11:36:50.902649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.906720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.906742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:51.822 [2024-11-05 11:36:50.906752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.056 ms 00:20:51.822 [2024-11-05 11:36:50.906761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.914568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 
11:36:50.914603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:51.822 [2024-11-05 11:36:50.914613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.785 ms 00:20:51.822 [2024-11-05 11:36:50.914620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.939031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.939066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:51.822 [2024-11-05 11:36:50.939076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.356 ms 00:20:51.822 [2024-11-05 11:36:50.939083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.953132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.953164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:51.822 [2024-11-05 11:36:50.953175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.017 ms 00:20:51.822 [2024-11-05 11:36:50.953182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.953290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.953301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:51.822 [2024-11-05 11:36:50.953313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:20:51.822 [2024-11-05 11:36:50.953320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.976433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.976464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:51.822 [2024-11-05 11:36:50.976474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.100 ms 00:20:51.822 [2024-11-05 11:36:50.976481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:50.999157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:50.999194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:51.822 [2024-11-05 11:36:50.999204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.647 ms 00:20:51.822 [2024-11-05 11:36:50.999210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:51.021525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:51.021557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:51.822 [2024-11-05 11:36:51.021566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.285 ms 00:20:51.822 [2024-11-05 11:36:51.021572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:51.043958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.822 [2024-11-05 11:36:51.043989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:51.822 [2024-11-05 11:36:51.043999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.335 ms 00:20:51.822 [2024-11-05 11:36:51.044006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.822 [2024-11-05 11:36:51.044036] ftl_debug.c: 
165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:51.822 [2024-11-05 11:36:51.044049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:51.822 [2024-11-05 11:36:51.044063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044230] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 
11:36:51.044412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:20:51.823 [2024-11-05 11:36:51.044593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:51.823 [2024-11-05 11:36:51.044716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:51.824 [2024-11-05 11:36:51.044790] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:51.824 [2024-11-05 11:36:51.044798] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:20:51.824 [2024-11-05 11:36:51.044824] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:51.824 [2024-11-05 11:36:51.044831] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:51.824 [2024-11-05 11:36:51.044840] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:51.824 [2024-11-05 11:36:51.044848] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:51.824 [2024-11-05 11:36:51.044855] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:51.824 [2024-11-05 11:36:51.044862] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:51.824 [2024-11-05 11:36:51.044874] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:51.824 [2024-11-05 11:36:51.044881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:51.824 [2024-11-05 11:36:51.044887] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:51.824 [2024-11-05 11:36:51.044894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.824 [2024-11-05 11:36:51.044901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:51.824 [2024-11-05 11:36:51.044909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.859 ms 00:20:51.824 [2024-11-05 11:36:51.044915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.057232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.824 [2024-11-05 11:36:51.057262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:51.824 [2024-11-05 11:36:51.057272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.302 ms 00:20:51.824 [2024-11-05 11:36:51.057280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.057611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:51.824 [2024-11-05 11:36:51.057620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:51.824 [2024-11-05 11:36:51.057627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.315 ms 00:20:51.824 [2024-11-05 11:36:51.057634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.090303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.824 [2024-11-05 11:36:51.090335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:51.824 [2024-11-05 11:36:51.090344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.824 [2024-11-05 11:36:51.090352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.090399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.824 [2024-11-05 11:36:51.090407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:51.824 [2024-11-05 11:36:51.090414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:51.824 [2024-11-05 11:36:51.090421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.090470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.824 [2024-11-05 11:36:51.090479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:51.824 [2024-11-05 11:36:51.090487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.824 [2024-11-05 11:36:51.090494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:51.824 [2024-11-05 11:36:51.090508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:51.824 [2024-11-05 11:36:51.090515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:51.824 [2024-11-05 11:36:51.090523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:51.824 [2024-11-05 11:36:51.090529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.082 [2024-11-05 11:36:51.167406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.082 [2024-11-05 11:36:51.167446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.082 [2024-11-05 11:36:51.167456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.082 [2024-11-05 11:36:51.167463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.082 [2024-11-05 11:36:51.230297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.082 [2024-11-05 11:36:51.230336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.082 [2024-11-05 11:36:51.230345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.082 [2024-11-05 11:36:51.230352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.082 [2024-11-05 11:36:51.230417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.082 [2024-11-05 11:36:51.230426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.082 [2024-11-05 11:36:51.230434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.082 [2024-11-05 11:36:51.230442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.082 [2024-11-05 11:36:51.230471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.082 [2024-11-05 11:36:51.230480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.082 [2024-11-05 11:36:51.230487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.082 [2024-11-05 11:36:51.230495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.082 [2024-11-05 11:36:51.230576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.082 [2024-11-05 11:36:51.230589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:52.083 [2024-11-05 11:36:51.230596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.083 [2024-11-05 11:36:51.230604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.083 [2024-11-05 11:36:51.230630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.083 [2024-11-05 11:36:51.230638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.083 
[2024-11-05 11:36:51.230646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.083 [2024-11-05 11:36:51.230653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.083 [2024-11-05 11:36:51.230692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.083 [2024-11-05 11:36:51.230704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.083 [2024-11-05 11:36:51.230711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.083 [2024-11-05 11:36:51.230718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.083 [2024-11-05 11:36:51.230754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.083 [2024-11-05 11:36:51.230764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.083 [2024-11-05 11:36:51.230771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.083 [2024-11-05 11:36:51.230778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.083 [2024-11-05 11:36:51.230901] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 331.703 ms, result 0 00:20:52.649 00:20:52.649 00:20:52.649 11:36:51 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:55.177 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:55.177 11:36:54 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:20:55.177 [2024-11-05 11:36:54.103930] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:20:55.177 [2024-11-05 11:36:54.104050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75817 ] 00:20:55.177 [2024-11-05 11:36:54.264644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.177 [2024-11-05 11:36:54.358041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.437 [2024-11-05 11:36:54.606266] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.437 [2024-11-05 11:36:54.606327] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:55.699 [2024-11-05 11:36:54.763254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.763301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:55.699 [2024-11-05 11:36:54.763317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.699 [2024-11-05 11:36:54.763325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.763370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.763380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:55.699 [2024-11-05 11:36:54.763391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:20:55.699 [2024-11-05 11:36:54.763398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.763417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:55.699 [2024-11-05 11:36:54.764095] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:55.699 [2024-11-05 11:36:54.764120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.764128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:55.699 [2024-11-05 11:36:54.764137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.708 ms 00:20:55.699 [2024-11-05 11:36:54.764144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.765216] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:55.699 [2024-11-05 11:36:54.778155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.778192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:55.699 [2024-11-05 11:36:54.778204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.940 ms 00:20:55.699 [2024-11-05 11:36:54.778211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.778260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.778272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:55.699 [2024-11-05 11:36:54.778280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:55.699 [2024-11-05 11:36:54.778287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.783392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:55.699 [2024-11-05 11:36:54.783423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:55.699 [2024-11-05 11:36:54.783432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.056 ms 00:20:55.699 [2024-11-05 11:36:54.783443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.783507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.783515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:55.699 [2024-11-05 11:36:54.783523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:55.699 [2024-11-05 11:36:54.783530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.783575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.783585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:55.699 [2024-11-05 11:36:54.783592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:55.699 [2024-11-05 11:36:54.783599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.699 [2024-11-05 11:36:54.783621] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:55.699 [2024-11-05 11:36:54.787038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.699 [2024-11-05 11:36:54.787066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:55.700 [2024-11-05 11:36:54.787075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.424 ms 00:20:55.700 [2024-11-05 11:36:54.787085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.700 [2024-11-05 11:36:54.787111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.700 [2024-11-05 11:36:54.787123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:55.700 [2024-11-05 11:36:54.787131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:55.700 [2024-11-05 11:36:54.787137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.700 [2024-11-05 11:36:54.787156] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:55.700 [2024-11-05 11:36:54.787173] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:55.700 [2024-11-05 11:36:54.787207] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:55.700 [2024-11-05 11:36:54.787225] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:55.700 [2024-11-05 11:36:54.787327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:55.700 [2024-11-05 11:36:54.787337] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:55.700 [2024-11-05 11:36:54.787347] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:55.700 [2024-11-05 11:36:54.787356] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787364] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787372] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:55.700 [2024-11-05 11:36:54.787379] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:55.700 [2024-11-05 11:36:54.787385] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:55.700 [2024-11-05 11:36:54.787395] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:55.700 [2024-11-05 11:36:54.787402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.700 [2024-11-05 11:36:54.787410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:55.700 [2024-11-05 11:36:54.787417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:55.700 [2024-11-05 11:36:54.787423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.700 [2024-11-05 11:36:54.787508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.700 [2024-11-05 11:36:54.787515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:55.700 [2024-11-05 11:36:54.787522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:20:55.700 [2024-11-05 11:36:54.787529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.700 [2024-11-05 11:36:54.787641] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:55.700 [2024-11-05 11:36:54.787652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:55.700 [2024-11-05 11:36:54.787660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787668] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:55.700 [2024-11-05 11:36:54.787682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:55.700 [2024-11-05 11:36:54.787703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787709] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.700 [2024-11-05 11:36:54.787718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:55.700 [2024-11-05 11:36:54.787725] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:55.700 [2024-11-05 11:36:54.787731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:55.700 [2024-11-05 11:36:54.787738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:55.700 [2024-11-05 11:36:54.787745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:55.700 [2024-11-05 11:36:54.787756] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787762] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:55.700 [2024-11-05 11:36:54.787769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787775] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:55.700 [2024-11-05 11:36:54.787788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:55.700 [2024-11-05 11:36:54.787820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:55.700 [2024-11-05 11:36:54.787839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787845] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:55.700 [2024-11-05 11:36:54.787858] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:55.700 [2024-11-05 11:36:54.787877] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787884] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.700 [2024-11-05 11:36:54.787890] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:55.700 [2024-11-05 11:36:54.787897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:55.700 [2024-11-05 11:36:54.787903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:55.700 [2024-11-05 11:36:54.787909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:55.700 [2024-11-05 11:36:54.787916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:55.700 [2024-11-05 11:36:54.787922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:55.700 [2024-11-05 11:36:54.787935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:55.700 [2024-11-05 11:36:54.787941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787948] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:55.700 [2024-11-05 11:36:54.787959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:55.700 [2024-11-05 11:36:54.787966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:55.700 [2024-11-05 11:36:54.787972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:55.700 [2024-11-05 11:36:54.787979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:55.700 [2024-11-05 11:36:54.787986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:55.700 [2024-11-05 11:36:54.787993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:55.700 
[2024-11-05 11:36:54.787999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:55.700 [2024-11-05 11:36:54.788005] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:55.700 [2024-11-05 11:36:54.788012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:55.700 [2024-11-05 11:36:54.788020] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:55.700 [2024-11-05 11:36:54.788029] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:55.700 [2024-11-05 11:36:54.788044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:55.700 [2024-11-05 11:36:54.788051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:55.700 [2024-11-05 11:36:54.788058] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:55.700 [2024-11-05 11:36:54.788065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:55.700 [2024-11-05 11:36:54.788072] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:55.700 [2024-11-05 11:36:54.788079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:55.700 [2024-11-05 11:36:54.788086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:55.700 [2024-11-05 11:36:54.788093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:55.700 [2024-11-05 11:36:54.788099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788106] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788113] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788120] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:55.700 [2024-11-05 11:36:54.788134] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:55.700 [2024-11-05 11:36:54.788144] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:55.700 [2024-11-05 11:36:54.788152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:55.701 [2024-11-05 11:36:54.788159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:55.701 [2024-11-05 11:36:54.788166] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:55.701 [2024-11-05 11:36:54.788174] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:55.701 [2024-11-05 11:36:54.788182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.788189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:55.701 [2024-11-05 11:36:54.788196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.611 ms 00:20:55.701 [2024-11-05 11:36:54.788203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.814069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.814103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:55.701 [2024-11-05 11:36:54.814113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.824 ms 00:20:55.701 [2024-11-05 11:36:54.814121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.814205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.814213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:55.701 [2024-11-05 11:36:54.814220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:20:55.701 [2024-11-05 11:36:54.814228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.854464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.854504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:55.701 [2024-11-05 11:36:54.854515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.188 ms 00:20:55.701 [2024-11-05 11:36:54.854523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.854560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.854569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:55.701 [2024-11-05 11:36:54.854581] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:55.701 [2024-11-05 11:36:54.854588] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.854981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.855006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:55.701 [2024-11-05 11:36:54.855015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:20:55.701 [2024-11-05 11:36:54.855022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.855140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.855149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:55.701 [2024-11-05 11:36:54.855157] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:20:55.701 [2024-11-05 11:36:54.855169] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.868278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.868306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:55.701 [2024-11-05 11:36:54.868316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.090 ms 00:20:55.701 [2024-11-05 11:36:54.868326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.881061] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:55.701 [2024-11-05 11:36:54.881095] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:55.701 [2024-11-05 11:36:54.881107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.881115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:55.701 [2024-11-05 11:36:54.881123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.682 ms 00:20:55.701 [2024-11-05 11:36:54.881130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.905210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.905243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:55.701 [2024-11-05 11:36:54.905254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.044 ms 00:20:55.701 [2024-11-05 11:36:54.905262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.916582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.916615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:55.701 [2024-11-05 11:36:54.916624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.284 ms 00:20:55.701 [2024-11-05 11:36:54.916631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.928008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.928040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:55.701 [2024-11-05 11:36:54.928050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.347 ms 00:20:55.701 [2024-11-05 11:36:54.928056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.701 [2024-11-05 11:36:54.928647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.701 [2024-11-05 11:36:54.928670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:55.701 [2024-11-05 11:36:54.928679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.513 ms 00:20:55.701 [2024-11-05 11:36:54.928688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.983729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.983774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:55.963 [2024-11-05 11:36:54.983790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 55.025 ms 00:20:55.963 [2024-11-05 11:36:54.983799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.994157] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:55.963 [2024-11-05 11:36:54.996240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.996272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:55.963 [2024-11-05 11:36:54.996284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.387 ms 00:20:55.963 [2024-11-05 11:36:54.996292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.996375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.996387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:55.963 [2024-11-05 11:36:54.996397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:55.963 [2024-11-05 11:36:54.996407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.996474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.996485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:55.963 [2024-11-05 11:36:54.996494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:55.963 [2024-11-05 11:36:54.996502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.996522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.996531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:55.963 [2024-11-05 11:36:54.996540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:55.963 [2024-11-05 11:36:54.996549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:54.996582] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:55.963 [2024-11-05 11:36:54.996592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:54.996599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:55.963 [2024-11-05 11:36:54.996607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:55.963 [2024-11-05 11:36:54.996615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:55.019352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:55.019386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:55.963 [2024-11-05 11:36:55.019397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.721 ms 00:20:55.963 [2024-11-05 11:36:55.019408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:55.963 [2024-11-05 11:36:55.019474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:55.963 [2024-11-05 11:36:55.019483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:55.963 [2024-11-05 11:36:55.019492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:55.963 [2024-11-05 11:36:55.019499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:55.963 [2024-11-05 11:36:55.020469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 256.810 ms, result 0 00:20:56.901  [2024-11-05T11:36:57.108Z] Copying: 22/1024 [MB] (22 MBps) [2024-11-05T11:36:58.043Z] Copying: 46/1024 [MB] (23 MBps) [2024-11-05T11:36:59.415Z] Copying: 80/1024 [MB] (34 MBps) [2024-11-05T11:37:00.348Z] Copying: 100/1024 [MB] (19 MBps) [2024-11-05T11:37:01.281Z] Copying: 117/1024 [MB] (16 MBps) [2024-11-05T11:37:02.213Z] Copying: 168/1024 [MB] (51 MBps) [2024-11-05T11:37:03.151Z] Copying: 206/1024 [MB] (38 MBps) [2024-11-05T11:37:04.083Z] Copying: 226/1024 [MB] (19 MBps) [2024-11-05T11:37:05.455Z] Copying: 265/1024 [MB] (38 MBps) [2024-11-05T11:37:06.388Z] Copying: 286/1024 [MB] (21 MBps) [2024-11-05T11:37:07.320Z] Copying: 309/1024 [MB] (22 MBps) [2024-11-05T11:37:08.253Z] Copying: 330/1024 [MB] (21 MBps) [2024-11-05T11:37:09.186Z] Copying: 350/1024 [MB] (19 MBps) [2024-11-05T11:37:10.118Z] Copying: 377/1024 [MB] (27 MBps) [2024-11-05T11:37:11.052Z] Copying: 410/1024 [MB] (33 MBps) [2024-11-05T11:37:12.427Z] Copying: 430/1024 [MB] (20 MBps) [2024-11-05T11:37:13.362Z] Copying: 450/1024 [MB] (19 MBps) [2024-11-05T11:37:14.296Z] Copying: 472/1024 [MB] (22 MBps) [2024-11-05T11:37:15.231Z] Copying: 495/1024 [MB] (23 MBps) [2024-11-05T11:37:16.166Z] Copying: 516/1024 [MB] (20 MBps) [2024-11-05T11:37:17.100Z] Copying: 531/1024 [MB] (15 MBps) [2024-11-05T11:37:18.035Z] Copying: 542/1024 [MB] (11 MBps) [2024-11-05T11:37:19.408Z] Copying: 553/1024 [MB] (11 MBps) [2024-11-05T11:37:20.340Z] Copying: 564/1024 [MB] (11 MBps) [2024-11-05T11:37:21.274Z] Copying: 575/1024 [MB] (11 MBps) [2024-11-05T11:37:22.207Z] Copying: 594/1024 [MB] (18 MBps) [2024-11-05T11:37:23.140Z] Copying: 608/1024 [MB] (14 MBps) [2024-11-05T11:37:24.071Z] Copying: 627/1024 [MB] (18 MBps) [2024-11-05T11:37:25.443Z] Copying: 639/1024 [MB] (12 MBps) [2024-11-05T11:37:26.424Z] Copying: 650/1024 [MB] (10 MBps) [2024-11-05T11:37:27.358Z] Copying: 661/1024 [MB] (11 MBps) [2024-11-05T11:37:28.292Z] Copying: 673/1024 [MB] (11 MBps) [2024-11-05T11:37:29.225Z] Copying: 684/1024 [MB] (11 MBps) [2024-11-05T11:37:30.159Z] Copying: 695/1024 [MB] (11 MBps) [2024-11-05T11:37:31.095Z] Copying: 706/1024 [MB] (11 MBps) [2024-11-05T11:37:32.036Z] Copying: 717/1024 [MB] (10 MBps) [2024-11-05T11:37:33.417Z] Copying: 767/1024 [MB] (50 MBps) [2024-11-05T11:37:34.355Z] Copying: 809/1024 [MB] (41 MBps) [2024-11-05T11:37:35.300Z] Copying: 821/1024 [MB] (12 MBps) [2024-11-05T11:37:36.243Z] Copying: 834/1024 [MB] (12 MBps) [2024-11-05T11:37:37.187Z] Copying: 844/1024 [MB] (10 MBps) [2024-11-05T11:37:38.131Z] Copying: 855/1024 [MB] (10 MBps) [2024-11-05T11:37:39.075Z] Copying: 880/1024 [MB] (25 MBps) [2024-11-05T11:37:40.463Z] Copying: 933/1024 [MB] (53 MBps) [2024-11-05T11:37:41.036Z] Copying: 955/1024 [MB] (21 MBps) [2024-11-05T11:37:42.423Z] Copying: 970/1024 [MB] (14 MBps) [2024-11-05T11:37:43.362Z] Copying: 982/1024 [MB] (11 MBps) [2024-11-05T11:37:44.306Z] Copying: 995/1024 [MB] (13 MBps) [2024-11-05T11:37:44.567Z] Copying: 1023/1024 [MB] (28 MBps) [2024-11-05T11:37:44.567Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-11-05 11:37:44.407616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.407798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:45.294 [2024-11-05 11:37:44.407831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:45.294 
[2024-11-05 11:37:44.407850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.408818] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:45.294 [2024-11-05 11:37:44.412140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.412182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:45.294 [2024-11-05 11:37:44.412194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.292 ms 00:21:45.294 [2024-11-05 11:37:44.412203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.424204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.424255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:45.294 [2024-11-05 11:37:44.424269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.834 ms 00:21:45.294 [2024-11-05 11:37:44.424278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.449770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.449823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:45.294 [2024-11-05 11:37:44.449835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.468 ms 00:21:45.294 [2024-11-05 11:37:44.449843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.455970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.456008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:45.294 [2024-11-05 11:37:44.456019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.094 ms 00:21:45.294 [2024-11-05 11:37:44.456027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.481889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.481933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:45.294 [2024-11-05 11:37:44.481946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.818 ms 00:21:45.294 [2024-11-05 11:37:44.481955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.294 [2024-11-05 11:37:44.497418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.294 [2024-11-05 11:37:44.497471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:45.294 [2024-11-05 11:37:44.497483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.420 ms 00:21:45.294 [2024-11-05 11:37:44.497491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.651414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.557 [2024-11-05 11:37:44.651484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:45.557 [2024-11-05 11:37:44.651498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 153.871 ms 00:21:45.557 [2024-11-05 11:37:44.651507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.676667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.557 [2024-11-05 11:37:44.676714] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:45.557 [2024-11-05 11:37:44.676728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.143 ms 00:21:45.557 [2024-11-05 11:37:44.676736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.702072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.557 [2024-11-05 11:37:44.702131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:45.557 [2024-11-05 11:37:44.702144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.289 ms 00:21:45.557 [2024-11-05 11:37:44.702152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.726914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.557 [2024-11-05 11:37:44.726959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:45.557 [2024-11-05 11:37:44.726971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.716 ms 00:21:45.557 [2024-11-05 11:37:44.726979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.751758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.557 [2024-11-05 11:37:44.751814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:45.557 [2024-11-05 11:37:44.751827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.707 ms 00:21:45.557 [2024-11-05 11:37:44.751837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.557 [2024-11-05 11:37:44.751880] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:45.557 [2024-11-05 11:37:44.751897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 113920 / 261120 wr_cnt: 1 state: open 00:21:45.557 [2024-11-05 11:37:44.751909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.751999] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:45.557 [2024-11-05 11:37:44.752053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 
11:37:44.752193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:21:45.558 [2024-11-05 11:37:44.752406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:45.558 [2024-11-05 11:37:44.752723] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:45.558 [2024-11-05 11:37:44.752732] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:21:45.558 [2024-11-05 11:37:44.752741] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 113920 00:21:45.558 [2024-11-05 11:37:44.752750] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 114880 00:21:45.558 [2024-11-05 11:37:44.752757] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 113920 00:21:45.558 [2024-11-05 11:37:44.752766] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0084 00:21:45.558 [2024-11-05 11:37:44.752773] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:45.558 [2024-11-05 11:37:44.752784] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:45.558 [2024-11-05 11:37:44.752810] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:45.558 [2024-11-05 11:37:44.752818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:45.558 [2024-11-05 11:37:44.752824] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:45.558 [2024-11-05 11:37:44.752832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.558 [2024-11-05 11:37:44.752841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:45.559 [2024-11-05 11:37:44.752851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:21:45.559 [2024-11-05 11:37:44.752859] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.766217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.559 [2024-11-05 11:37:44.766260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:45.559 [2024-11-05 11:37:44.766273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.327 ms 00:21:45.559 [2024-11-05 11:37:44.766289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.766679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:45.559 [2024-11-05 11:37:44.766700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:45.559 [2024-11-05 11:37:44.766711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:21:45.559 [2024-11-05 11:37:44.766718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.803217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.559 [2024-11-05 11:37:44.803267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:45.559 [2024-11-05 11:37:44.803278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.559 [2024-11-05 11:37:44.803287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.803350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.559 [2024-11-05 11:37:44.803359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:45.559 [2024-11-05 11:37:44.803367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.559 [2024-11-05 11:37:44.803375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.803443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.559 [2024-11-05 11:37:44.803454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:45.559 [2024-11-05 11:37:44.803468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.559 [2024-11-05 11:37:44.803475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.559 [2024-11-05 11:37:44.803491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.559 [2024-11-05 11:37:44.803500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:45.559 [2024-11-05 11:37:44.803508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.559 [2024-11-05 11:37:44.803516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.820 [2024-11-05 11:37:44.886632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.886691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:45.821 [2024-11-05 11:37:44.886711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.886720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:45.821 [2024-11-05 11:37:44.955162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:45.821 [2024-11-05 11:37:44.955273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:45.821 [2024-11-05 11:37:44.955346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:45.821 [2024-11-05 11:37:44.955641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:45.821 [2024-11-05 11:37:44.955711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:45.821 [2024-11-05 11:37:44.955779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.955861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:45.821 [2024-11-05 11:37:44.955879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:45.821 [2024-11-05 11:37:44.955888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:45.821 [2024-11-05 11:37:44.955896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:45.821 [2024-11-05 11:37:44.956031] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 552.305 ms, result 0 00:21:47.738 00:21:47.738 00:21:47.738 11:37:46 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:21:47.738 [2024-11-05 11:37:46.720066] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:21:47.738 [2024-11-05 11:37:46.720218] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76357 ] 00:21:47.738 [2024-11-05 11:37:46.884867] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.738 [2024-11-05 11:37:47.001485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.313 [2024-11-05 11:37:47.350496] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.313 [2024-11-05 11:37:47.350578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:48.313 [2024-11-05 11:37:47.511093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.511159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:48.313 [2024-11-05 11:37:47.511178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:48.313 [2024-11-05 11:37:47.511186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.511237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.511247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:48.313 [2024-11-05 11:37:47.511258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:48.313 [2024-11-05 11:37:47.511267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.511286] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:48.313 [2024-11-05 11:37:47.511973] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:48.313 [2024-11-05 11:37:47.512001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.512009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:48.313 [2024-11-05 11:37:47.512019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.719 ms 00:21:48.313 [2024-11-05 11:37:47.512027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.513639] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:48.313 [2024-11-05 11:37:47.527721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.527776] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:48.313 [2024-11-05 11:37:47.527790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.084 ms 00:21:48.313 [2024-11-05 11:37:47.527798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.527894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.527907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:48.313 [2024-11-05 11:37:47.527917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:21:48.313 [2024-11-05 11:37:47.527925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.536000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:48.313 [2024-11-05 11:37:47.536044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:48.313 [2024-11-05 11:37:47.536055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.000 ms 00:21:48.313 [2024-11-05 11:37:47.536063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.536146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.536155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:48.313 [2024-11-05 11:37:47.536164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:21:48.313 [2024-11-05 11:37:47.536172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.536214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.536225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:48.313 [2024-11-05 11:37:47.536234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:21:48.313 [2024-11-05 11:37:47.536242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.536266] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:48.313 [2024-11-05 11:37:47.540336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.540380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:48.313 [2024-11-05 11:37:47.540391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.074 ms 00:21:48.313 [2024-11-05 11:37:47.540402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.540437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.540447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:48.313 [2024-11-05 11:37:47.540455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:48.313 [2024-11-05 11:37:47.540463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.540515] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:48.313 [2024-11-05 11:37:47.540537] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:48.313 [2024-11-05 11:37:47.540575] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:48.313 [2024-11-05 11:37:47.540594] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:48.313 [2024-11-05 11:37:47.540702] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:48.313 [2024-11-05 11:37:47.540713] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:48.313 [2024-11-05 11:37:47.540724] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:48.313 [2024-11-05 11:37:47.540735] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:48.313 [2024-11-05 11:37:47.540744] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:48.313 [2024-11-05 11:37:47.540754] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:48.313 [2024-11-05 11:37:47.540762] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:48.313 [2024-11-05 11:37:47.540770] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:48.313 [2024-11-05 11:37:47.540778] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:48.313 [2024-11-05 11:37:47.540789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.540797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:48.313 [2024-11-05 11:37:47.540821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.277 ms 00:21:48.313 [2024-11-05 11:37:47.540829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.540912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.313 [2024-11-05 11:37:47.540922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:48.313 [2024-11-05 11:37:47.540930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:48.313 [2024-11-05 11:37:47.540938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.313 [2024-11-05 11:37:47.541043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:48.313 [2024-11-05 11:37:47.541068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:48.313 [2024-11-05 11:37:47.541077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.313 [2024-11-05 11:37:47.541086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.313 [2024-11-05 11:37:47.541095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:48.313 [2024-11-05 11:37:47.541102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:48.313 [2024-11-05 11:37:47.541109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:48.313 [2024-11-05 11:37:47.541116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:48.313 [2024-11-05 11:37:47.541125] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:48.313 [2024-11-05 11:37:47.541133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.313 [2024-11-05 11:37:47.541140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:48.313 [2024-11-05 11:37:47.541148] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:48.313 [2024-11-05 11:37:47.541155] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:48.313 [2024-11-05 11:37:47.541163] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:48.313 [2024-11-05 11:37:47.541171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:48.314 [2024-11-05 11:37:47.541184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:48.314 [2024-11-05 11:37:47.541199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541206] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:48.314 [2024-11-05 11:37:47.541221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541228] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:48.314 [2024-11-05 11:37:47.541243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541250] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:48.314 [2024-11-05 11:37:47.541265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:48.314 [2024-11-05 11:37:47.541285] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:48.314 [2024-11-05 11:37:47.541307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.314 [2024-11-05 11:37:47.541320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:48.314 [2024-11-05 11:37:47.541326] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:48.314 [2024-11-05 11:37:47.541333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:48.314 [2024-11-05 11:37:47.541339] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:48.314 [2024-11-05 11:37:47.541345] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:48.314 [2024-11-05 11:37:47.541351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:48.314 [2024-11-05 11:37:47.541365] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:48.314 [2024-11-05 11:37:47.541371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541378] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:48.314 [2024-11-05 11:37:47.541385] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:48.314 [2024-11-05 11:37:47.541394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:48.314 [2024-11-05 11:37:47.541411] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:48.314 [2024-11-05 11:37:47.541418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:48.314 [2024-11-05 11:37:47.541425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:48.314 
[2024-11-05 11:37:47.541432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:48.314 [2024-11-05 11:37:47.541439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:48.314 [2024-11-05 11:37:47.541446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:48.314 [2024-11-05 11:37:47.541454] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:48.314 [2024-11-05 11:37:47.541463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:48.314 [2024-11-05 11:37:47.541480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:48.314 [2024-11-05 11:37:47.541488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:48.314 [2024-11-05 11:37:47.541496] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:48.314 [2024-11-05 11:37:47.541504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:48.314 [2024-11-05 11:37:47.541510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:48.314 [2024-11-05 11:37:47.541519] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:48.314 [2024-11-05 11:37:47.541526] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:48.314 [2024-11-05 11:37:47.541533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:48.314 [2024-11-05 11:37:47.541541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541572] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:48.314 [2024-11-05 11:37:47.541579] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:48.314 [2024-11-05 11:37:47.541590] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541601] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:48.314 [2024-11-05 11:37:47.541609] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:48.314 [2024-11-05 11:37:47.541617] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:48.314 [2024-11-05 11:37:47.541625] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:48.314 [2024-11-05 11:37:47.541633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.314 [2024-11-05 11:37:47.541641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:48.314 [2024-11-05 11:37:47.541652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.658 ms 00:21:48.314 [2024-11-05 11:37:47.541660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.314 [2024-11-05 11:37:47.573173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.314 [2024-11-05 11:37:47.573227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:48.314 [2024-11-05 11:37:47.573238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.469 ms 00:21:48.314 [2024-11-05 11:37:47.573248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.314 [2024-11-05 11:37:47.573332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.314 [2024-11-05 11:37:47.573346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:48.314 [2024-11-05 11:37:47.573355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:48.314 [2024-11-05 11:37:47.573363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.622343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.622403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:48.577 [2024-11-05 11:37:47.622416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.922 ms 00:21:48.577 [2024-11-05 11:37:47.622425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.622474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.622486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:48.577 [2024-11-05 11:37:47.622495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:48.577 [2024-11-05 11:37:47.622507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.623195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.623234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:48.577 [2024-11-05 11:37:47.623245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.612 ms 00:21:48.577 [2024-11-05 11:37:47.623253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.623409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.623420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:48.577 [2024-11-05 11:37:47.623429] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:48.577 [2024-11-05 11:37:47.623438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.638955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.639003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:48.577 [2024-11-05 11:37:47.639015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.491 ms 00:21:48.577 [2024-11-05 11:37:47.639026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.653234] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:48.577 [2024-11-05 11:37:47.653287] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:48.577 [2024-11-05 11:37:47.653301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.653310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:48.577 [2024-11-05 11:37:47.653320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.166 ms 00:21:48.577 [2024-11-05 11:37:47.653327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.678966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.577 [2024-11-05 11:37:47.679029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:48.577 [2024-11-05 11:37:47.679041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.587 ms 00:21:48.577 [2024-11-05 11:37:47.679050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.577 [2024-11-05 11:37:47.691898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.691957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:48.578 [2024-11-05 11:37:47.691969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.790 ms 00:21:48.578 [2024-11-05 11:37:47.691976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.704192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.704240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:48.578 [2024-11-05 11:37:47.704252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.170 ms 00:21:48.578 [2024-11-05 11:37:47.704258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.704942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.704973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:48.578 [2024-11-05 11:37:47.704984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.576 ms 00:21:48.578 [2024-11-05 11:37:47.704993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.768978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.769051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:48.578 [2024-11-05 11:37:47.769067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.962 ms 00:21:48.578 [2024-11-05 11:37:47.769082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.781144] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:48.578 [2024-11-05 11:37:47.784079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.784126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:48.578 [2024-11-05 11:37:47.784138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.941 ms 00:21:48.578 [2024-11-05 11:37:47.784147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.784228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.784239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:48.578 [2024-11-05 11:37:47.784249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:21:48.578 [2024-11-05 11:37:47.784257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.786059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.786103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:48.578 [2024-11-05 11:37:47.786113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.761 ms 00:21:48.578 [2024-11-05 11:37:47.786121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.786150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.786158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:48.578 [2024-11-05 11:37:47.786167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:48.578 [2024-11-05 11:37:47.786175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.786218] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:48.578 [2024-11-05 11:37:47.786231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.786239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:48.578 [2024-11-05 11:37:47.786249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:48.578 [2024-11-05 11:37:47.786258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.811268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.811323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:48.578 [2024-11-05 11:37:47.811336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.992 ms 00:21:48.578 [2024-11-05 11:37:47.811345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:48.578 [2024-11-05 11:37:47.811439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:48.578 [2024-11-05 11:37:47.811449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:48.578 [2024-11-05 11:37:47.811458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:21:48.578 [2024-11-05 11:37:47.811466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:48.578 [2024-11-05 11:37:47.812674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.107 ms, result 0 00:21:49.968  [2024-11-05T11:37:50.208Z] Copying: 15/1024 [MB] (15 MBps) [2024-11-05T11:37:51.152Z] Copying: 37/1024 [MB] (21 MBps) [2024-11-05T11:37:52.093Z] Copying: 55/1024 [MB] (17 MBps) [2024-11-05T11:37:53.036Z] Copying: 67/1024 [MB] (12 MBps) [2024-11-05T11:37:54.426Z] Copying: 82/1024 [MB] (14 MBps) [2024-11-05T11:37:55.370Z] Copying: 101/1024 [MB] (18 MBps) [2024-11-05T11:37:56.315Z] Copying: 125/1024 [MB] (24 MBps) [2024-11-05T11:37:57.261Z] Copying: 146/1024 [MB] (20 MBps) [2024-11-05T11:37:58.206Z] Copying: 167/1024 [MB] (21 MBps) [2024-11-05T11:37:59.151Z] Copying: 190/1024 [MB] (23 MBps) [2024-11-05T11:38:00.090Z] Copying: 206/1024 [MB] (15 MBps) [2024-11-05T11:38:01.032Z] Copying: 222/1024 [MB] (16 MBps) [2024-11-05T11:38:02.419Z] Copying: 241/1024 [MB] (19 MBps) [2024-11-05T11:38:03.363Z] Copying: 262/1024 [MB] (20 MBps) [2024-11-05T11:38:04.304Z] Copying: 281/1024 [MB] (19 MBps) [2024-11-05T11:38:05.247Z] Copying: 304/1024 [MB] (22 MBps) [2024-11-05T11:38:06.191Z] Copying: 326/1024 [MB] (22 MBps) [2024-11-05T11:38:07.133Z] Copying: 345/1024 [MB] (19 MBps) [2024-11-05T11:38:08.076Z] Copying: 356/1024 [MB] (10 MBps) [2024-11-05T11:38:09.025Z] Copying: 367/1024 [MB] (10 MBps) [2024-11-05T11:38:10.423Z] Copying: 377/1024 [MB] (10 MBps) [2024-11-05T11:38:11.362Z] Copying: 388/1024 [MB] (10 MBps) [2024-11-05T11:38:12.305Z] Copying: 399/1024 [MB] (10 MBps) [2024-11-05T11:38:13.245Z] Copying: 411/1024 [MB] (12 MBps) [2024-11-05T11:38:14.190Z] Copying: 422/1024 [MB] (10 MBps) [2024-11-05T11:38:15.135Z] Copying: 432/1024 [MB] (10 MBps) [2024-11-05T11:38:16.079Z] Copying: 443/1024 [MB] (10 MBps) [2024-11-05T11:38:17.036Z] Copying: 455/1024 [MB] (12 MBps) [2024-11-05T11:38:18.421Z] Copying: 467/1024 [MB] (12 MBps) [2024-11-05T11:38:19.364Z] Copying: 478/1024 [MB] (10 MBps) [2024-11-05T11:38:20.305Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-05T11:38:21.247Z] Copying: 503/1024 [MB] (14 MBps) [2024-11-05T11:38:22.181Z] Copying: 520/1024 [MB] (17 MBps) [2024-11-05T11:38:23.118Z] Copying: 543/1024 [MB] (22 MBps) [2024-11-05T11:38:24.104Z] Copying: 565/1024 [MB] (21 MBps) [2024-11-05T11:38:25.046Z] Copying: 588/1024 [MB] (23 MBps) [2024-11-05T11:38:26.430Z] Copying: 606/1024 [MB] (17 MBps) [2024-11-05T11:38:27.373Z] Copying: 623/1024 [MB] (16 MBps) [2024-11-05T11:38:28.308Z] Copying: 639/1024 [MB] (16 MBps) [2024-11-05T11:38:29.242Z] Copying: 658/1024 [MB] (19 MBps) [2024-11-05T11:38:30.185Z] Copying: 678/1024 [MB] (20 MBps) [2024-11-05T11:38:31.131Z] Copying: 696/1024 [MB] (18 MBps) [2024-11-05T11:38:32.074Z] Copying: 712/1024 [MB] (15 MBps) [2024-11-05T11:38:33.009Z] Copying: 729/1024 [MB] (16 MBps) [2024-11-05T11:38:34.392Z] Copying: 740/1024 [MB] (11 MBps) [2024-11-05T11:38:35.333Z] Copying: 751/1024 [MB] (11 MBps) [2024-11-05T11:38:36.280Z] Copying: 762/1024 [MB] (10 MBps) [2024-11-05T11:38:37.224Z] Copying: 772/1024 [MB] (10 MBps) [2024-11-05T11:38:38.166Z] Copying: 783/1024 [MB] (10 MBps) [2024-11-05T11:38:39.105Z] Copying: 793/1024 [MB] (10 MBps) [2024-11-05T11:38:40.047Z] Copying: 804/1024 [MB] (10 MBps) [2024-11-05T11:38:41.431Z] Copying: 815/1024 [MB] (10 MBps) [2024-11-05T11:38:42.370Z] Copying: 825/1024 [MB] (10 MBps) [2024-11-05T11:38:43.310Z] Copying: 836/1024 [MB] (10 MBps) [2024-11-05T11:38:44.250Z] Copying: 847/1024 [MB] (10 MBps) [2024-11-05T11:38:45.205Z] Copying: 869/1024 [MB] (22 MBps) 
[2024-11-05T11:38:46.149Z] Copying: 885/1024 [MB] (16 MBps) [2024-11-05T11:38:47.090Z] Copying: 900/1024 [MB] (15 MBps) [2024-11-05T11:38:48.034Z] Copying: 920/1024 [MB] (19 MBps) [2024-11-05T11:38:49.417Z] Copying: 939/1024 [MB] (19 MBps) [2024-11-05T11:38:50.362Z] Copying: 966/1024 [MB] (26 MBps) [2024-11-05T11:38:51.300Z] Copying: 986/1024 [MB] (20 MBps) [2024-11-05T11:38:52.241Z] Copying: 1007/1024 [MB] (20 MBps) [2024-11-05T11:38:52.241Z] Copying: 1020/1024 [MB] (12 MBps) [2024-11-05T11:38:52.814Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-11-05 11:38:52.576963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.540 [2024-11-05 11:38:52.577050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:53.540 [2024-11-05 11:38:52.577067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:53.540 [2024-11-05 11:38:52.577077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.540 [2024-11-05 11:38:52.577107] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:53.540 [2024-11-05 11:38:52.580140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.540 [2024-11-05 11:38:52.580200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:53.540 [2024-11-05 11:38:52.580212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:22:53.540 [2024-11-05 11:38:52.580220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.540 [2024-11-05 11:38:52.580464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.540 [2024-11-05 11:38:52.580475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:53.540 [2024-11-05 11:38:52.580485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.215 ms 00:22:53.540 [2024-11-05 11:38:52.580493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.540 [2024-11-05 11:38:52.585693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.540 [2024-11-05 11:38:52.585754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:53.540 [2024-11-05 11:38:52.585765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.182 ms 00:22:53.540 [2024-11-05 11:38:52.585773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.540 [2024-11-05 11:38:52.592168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.540 [2024-11-05 11:38:52.592205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:53.540 [2024-11-05 11:38:52.592216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.335 ms 00:22:53.541 [2024-11-05 11:38:52.592225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.541 [2024-11-05 11:38:52.621687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.541 [2024-11-05 11:38:52.621733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:53.541 [2024-11-05 11:38:52.621747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.382 ms 00:22:53.541 [2024-11-05 11:38:52.621756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.541 [2024-11-05 11:38:52.642528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.541 [2024-11-05 11:38:52.642575] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:53.541 [2024-11-05 11:38:52.642597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.709 ms 00:22:53.541 [2024-11-05 11:38:52.642606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.802 [2024-11-05 11:38:52.987859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.802 [2024-11-05 11:38:52.987926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:53.802 [2024-11-05 11:38:52.987943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 345.213 ms 00:22:53.802 [2024-11-05 11:38:52.987954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.802 [2024-11-05 11:38:53.014517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.802 [2024-11-05 11:38:53.014573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:53.802 [2024-11-05 11:38:53.014587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.546 ms 00:22:53.802 [2024-11-05 11:38:53.014596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.802 [2024-11-05 11:38:53.039908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.802 [2024-11-05 11:38:53.039958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:53.802 [2024-11-05 11:38:53.039982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.262 ms 00:22:53.802 [2024-11-05 11:38:53.039990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:53.802 [2024-11-05 11:38:53.065207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:53.802 [2024-11-05 11:38:53.065258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:53.802 [2024-11-05 11:38:53.065269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.174 ms 00:22:53.802 [2024-11-05 11:38:53.065277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.063 [2024-11-05 11:38:53.090408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.063 [2024-11-05 11:38:53.090459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:54.063 [2024-11-05 11:38:53.090471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.045 ms 00:22:54.063 [2024-11-05 11:38:53.090479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.063 [2024-11-05 11:38:53.090521] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:54.063 [2024-11-05 11:38:53.090537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:22:54.063 [2024-11-05 11:38:53.090548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 
[2024-11-05 11:38:53.090589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:54.063 [2024-11-05 11:38:53.090711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: 
free 00:22:54.064 [2024-11-05 11:38:53.090779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.090999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 
261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:54.064 [2024-11-05 11:38:53.091389] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:54.064 [2024-11-05 11:38:53.091397] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 7108cdfb-1a51-4b6f-821b-c2680d6e4cf0 00:22:54.064 [2024-11-05 11:38:53.091405] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:22:54.064 [2024-11-05 11:38:53.091413] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 18112 00:22:54.064 [2024-11-05 11:38:53.091420] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 17152 00:22:54.064 [2024-11-05 11:38:53.091430] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0560 00:22:54.064 [2024-11-05 11:38:53.091437] 
ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:54.064 [2024-11-05 11:38:53.091445] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:54.064 [2024-11-05 11:38:53.091458] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:54.064 [2024-11-05 11:38:53.091471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:54.064 [2024-11-05 11:38:53.091478] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:54.064 [2024-11-05 11:38:53.091486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.064 [2024-11-05 11:38:53.091495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:54.064 [2024-11-05 11:38:53.091504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.966 ms 00:22:54.064 [2024-11-05 11:38:53.091512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.104959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.065 [2024-11-05 11:38:53.105006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:54.065 [2024-11-05 11:38:53.105018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.412 ms 00:22:54.065 [2024-11-05 11:38:53.105026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.105414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:54.065 [2024-11-05 11:38:53.105445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:54.065 [2024-11-05 11:38:53.105455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:22:54.065 [2024-11-05 11:38:53.105463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.141856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.141906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:54.065 [2024-11-05 11:38:53.141923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.141934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.142003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.142013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:54.065 [2024-11-05 11:38:53.142023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.142033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.142100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.142112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:54.065 [2024-11-05 11:38:53.142121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.142134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.142150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.142160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:54.065 [2024-11-05 11:38:53.142168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.000 ms 00:22:54.065 [2024-11-05 11:38:53.142177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.225656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.225716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:54.065 [2024-11-05 11:38:53.225730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.225745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:54.065 [2024-11-05 11:38:53.295110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:54.065 [2024-11-05 11:38:53.295213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:54.065 [2024-11-05 11:38:53.295288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:54.065 [2024-11-05 11:38:53.295415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:54.065 [2024-11-05 11:38:53.295478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:54.065 [2024-11-05 11:38:53.295547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:54.065 [2024-11-05 11:38:53.295619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:54.065 [2024-11-05 11:38:53.295628] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:54.065 [2024-11-05 11:38:53.295636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:54.065 [2024-11-05 11:38:53.295766] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 718.770 ms, result 0 00:22:55.008 00:22:55.008 00:22:55.008 11:38:54 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:57.049 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:22:57.049 11:38:56 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:22:57.049 11:38:56 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:22:57.049 11:38:56 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74328 00:22:57.309 11:38:56 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 74328 ']' 00:22:57.309 11:38:56 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 74328 00:22:57.309 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (74328) - No such process 00:22:57.309 Process with pid 74328 is not found 00:22:57.309 11:38:56 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 74328 is not found' 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:22:57.309 Remove shared memory files 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:57.309 11:38:56 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:22:57.309 00:22:57.309 real 4m21.955s 00:22:57.309 user 4m10.121s 00:22:57.309 sys 0m11.922s 00:22:57.309 ************************************ 00:22:57.309 END TEST ftl_restore 00:22:57.309 ************************************ 00:22:57.309 11:38:56 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:57.309 11:38:56 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:22:57.309 11:38:56 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:57.309 11:38:56 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:22:57.309 11:38:56 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:57.309 11:38:56 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:57.309 ************************************ 00:22:57.309 START TEST ftl_dirty_shutdown 00:22:57.309 ************************************ 00:22:57.309 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:22:57.309 * Looking for test storage... 
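For orientation, the dirty-shutdown run that begins here is launched with the same harness and devices shown in the trace; a minimal sketch of the invocation, using only the path and PCI addresses visible in this log, would be:

    # -c selects the controller used for the NV cache; the remaining positional
    # argument is the base device (both addresses taken from this run).
    /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0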
00:22:57.309 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.309 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:57.309 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:22:57.309 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.571 --rc genhtml_branch_coverage=1 00:22:57.571 --rc genhtml_function_coverage=1 00:22:57.571 --rc genhtml_legend=1 00:22:57.571 --rc geninfo_all_blocks=1 00:22:57.571 --rc geninfo_unexecuted_blocks=1 00:22:57.571 00:22:57.571 ' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.571 --rc genhtml_branch_coverage=1 00:22:57.571 --rc genhtml_function_coverage=1 00:22:57.571 --rc genhtml_legend=1 00:22:57.571 --rc geninfo_all_blocks=1 00:22:57.571 --rc geninfo_unexecuted_blocks=1 00:22:57.571 00:22:57.571 ' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.571 --rc genhtml_branch_coverage=1 00:22:57.571 --rc genhtml_function_coverage=1 00:22:57.571 --rc genhtml_legend=1 00:22:57.571 --rc geninfo_all_blocks=1 00:22:57.571 --rc geninfo_unexecuted_blocks=1 00:22:57.571 00:22:57.571 ' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:57.571 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:57.571 --rc genhtml_branch_coverage=1 00:22:57.571 --rc genhtml_function_coverage=1 00:22:57.571 --rc genhtml_legend=1 00:22:57.571 --rc geninfo_all_blocks=1 00:22:57.571 --rc geninfo_unexecuted_blocks=1 00:22:57.571 00:22:57.571 ' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:57.571 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:22:57.572 11:38:56 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=77146 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 77146 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 77146 ']' 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:57.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:57.572 11:38:56 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:22:57.572 [2024-11-05 11:38:56.711398] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
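The target bring-up traced above reduces to a short sequence; a sketch under the assumptions of this run (repo checked out at /home/vagrant/spdk_repo/spdk, default RPC socket) might look like:

    # Start the SPDK target pinned to core 0, matching the -m 0x1 mask used here.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
    svcpid=$!
    # waitforlisten (from test/common/autotest_common.sh) then polls until the target
    # accepts RPCs on /var/tmp/spdk.sock before any of the bdev setup below is attempted.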
00:22:57.572 [2024-11-05 11:38:56.711552] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77146 ] 00:22:57.831 [2024-11-05 11:38:56.872115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.831 [2024-11-05 11:38:56.989924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.403 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:58.403 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:58.663 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:22:58.924 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:22:58.924 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:58.925 11:38:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:22:58.925 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:22:58.925 { 00:22:58.925 "name": "nvme0n1", 00:22:58.925 "aliases": [ 00:22:58.925 "0734a3b9-f726-49a8-bd9b-77a10e914296" 00:22:58.925 ], 00:22:58.925 "product_name": "NVMe disk", 00:22:58.925 "block_size": 4096, 00:22:58.925 "num_blocks": 1310720, 00:22:58.925 "uuid": "0734a3b9-f726-49a8-bd9b-77a10e914296", 00:22:58.925 "numa_id": -1, 00:22:58.925 "assigned_rate_limits": { 00:22:58.925 "rw_ios_per_sec": 0, 00:22:58.925 "rw_mbytes_per_sec": 0, 00:22:58.925 "r_mbytes_per_sec": 0, 00:22:58.925 "w_mbytes_per_sec": 0 00:22:58.925 }, 00:22:58.925 "claimed": true, 00:22:58.925 "claim_type": "read_many_write_one", 00:22:58.925 "zoned": false, 00:22:58.925 "supported_io_types": { 00:22:58.925 "read": true, 00:22:58.925 "write": true, 00:22:58.925 "unmap": true, 00:22:58.925 "flush": true, 00:22:58.925 "reset": true, 00:22:58.925 "nvme_admin": true, 00:22:58.925 "nvme_io": true, 00:22:58.925 "nvme_io_md": false, 00:22:58.925 "write_zeroes": true, 00:22:58.925 "zcopy": false, 00:22:58.925 "get_zone_info": false, 00:22:58.925 "zone_management": false, 00:22:58.925 "zone_append": false, 00:22:58.925 "compare": true, 00:22:58.925 "compare_and_write": false, 00:22:58.925 "abort": true, 00:22:58.925 "seek_hole": false, 00:22:58.925 "seek_data": false, 00:22:58.925 
"copy": true, 00:22:58.925 "nvme_iov_md": false 00:22:58.925 }, 00:22:58.925 "driver_specific": { 00:22:58.925 "nvme": [ 00:22:58.925 { 00:22:58.925 "pci_address": "0000:00:11.0", 00:22:58.925 "trid": { 00:22:58.925 "trtype": "PCIe", 00:22:58.925 "traddr": "0000:00:11.0" 00:22:58.925 }, 00:22:58.925 "ctrlr_data": { 00:22:58.925 "cntlid": 0, 00:22:58.925 "vendor_id": "0x1b36", 00:22:58.925 "model_number": "QEMU NVMe Ctrl", 00:22:58.925 "serial_number": "12341", 00:22:58.925 "firmware_revision": "8.0.0", 00:22:58.925 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:58.925 "oacs": { 00:22:58.925 "security": 0, 00:22:58.925 "format": 1, 00:22:58.925 "firmware": 0, 00:22:58.925 "ns_manage": 1 00:22:58.925 }, 00:22:58.925 "multi_ctrlr": false, 00:22:58.925 "ana_reporting": false 00:22:58.925 }, 00:22:58.925 "vs": { 00:22:58.925 "nvme_version": "1.4" 00:22:58.925 }, 00:22:58.925 "ns_data": { 00:22:58.925 "id": 1, 00:22:58.925 "can_share": false 00:22:58.925 } 00:22:58.925 } 00:22:58.925 ], 00:22:58.925 "mp_policy": "active_passive" 00:22:58.925 } 00:22:58.925 } 00:22:58.925 ]' 00:22:58.925 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:22:59.186 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:59.187 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:22:59.187 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:59.187 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:59.187 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:59.446 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a91abc81-b126-4a88-8d9f-e09ab1598e55 00:22:59.446 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:59.446 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a91abc81-b126-4a88-8d9f-e09ab1598e55 00:22:59.446 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:22:59.706 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=05e9054d-1a83-4474-b955-7fd811825a79 00:22:59.706 11:38:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 05e9054d-1a83-4474-b955-7fd811825a79 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=3d135058-dade-4e7f-a58b-3b365a3a2832 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 3d135058-dade-4e7f-a58b-3b365a3a2832 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=3d135058-dade-4e7f-a58b-3b365a3a2832 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3d135058-dade-4e7f-a58b-3b365a3a2832 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=3d135058-dade-4e7f-a58b-3b365a3a2832 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:22:59.967 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:00.227 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:00.227 { 00:23:00.227 "name": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:00.227 "aliases": [ 00:23:00.227 "lvs/nvme0n1p0" 00:23:00.227 ], 00:23:00.227 "product_name": "Logical Volume", 00:23:00.227 "block_size": 4096, 00:23:00.227 "num_blocks": 26476544, 00:23:00.227 "uuid": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:00.227 "assigned_rate_limits": { 00:23:00.227 "rw_ios_per_sec": 0, 00:23:00.227 "rw_mbytes_per_sec": 0, 00:23:00.227 "r_mbytes_per_sec": 0, 00:23:00.227 "w_mbytes_per_sec": 0 00:23:00.227 }, 00:23:00.227 "claimed": false, 00:23:00.227 "zoned": false, 00:23:00.227 "supported_io_types": { 00:23:00.227 "read": true, 00:23:00.227 "write": true, 00:23:00.227 "unmap": true, 00:23:00.227 "flush": false, 00:23:00.227 "reset": true, 00:23:00.227 "nvme_admin": false, 00:23:00.227 "nvme_io": false, 00:23:00.227 "nvme_io_md": false, 00:23:00.227 "write_zeroes": true, 00:23:00.227 "zcopy": false, 00:23:00.227 "get_zone_info": false, 00:23:00.227 "zone_management": false, 00:23:00.227 "zone_append": false, 00:23:00.227 "compare": false, 00:23:00.227 "compare_and_write": false, 00:23:00.227 "abort": false, 00:23:00.227 "seek_hole": true, 00:23:00.227 "seek_data": true, 00:23:00.227 "copy": false, 00:23:00.227 "nvme_iov_md": false 00:23:00.227 }, 00:23:00.227 "driver_specific": { 00:23:00.227 "lvol": { 00:23:00.227 "lvol_store_uuid": "05e9054d-1a83-4474-b955-7fd811825a79", 00:23:00.228 "base_bdev": "nvme0n1", 00:23:00.228 "thin_provision": true, 00:23:00.228 "num_allocated_clusters": 0, 00:23:00.228 "snapshot": false, 00:23:00.228 "clone": false, 00:23:00.228 "esnap_clone": false 00:23:00.228 } 00:23:00.228 } 00:23:00.228 } 00:23:00.228 ]' 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:00.228 11:38:59 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:00.486 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:00.745 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:00.745 { 00:23:00.745 "name": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:00.745 "aliases": [ 00:23:00.745 "lvs/nvme0n1p0" 00:23:00.745 ], 00:23:00.745 "product_name": "Logical Volume", 00:23:00.745 "block_size": 4096, 00:23:00.745 "num_blocks": 26476544, 00:23:00.745 "uuid": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:00.745 "assigned_rate_limits": { 00:23:00.745 "rw_ios_per_sec": 0, 00:23:00.745 "rw_mbytes_per_sec": 0, 00:23:00.745 "r_mbytes_per_sec": 0, 00:23:00.745 "w_mbytes_per_sec": 0 00:23:00.745 }, 00:23:00.745 "claimed": false, 00:23:00.745 "zoned": false, 00:23:00.745 "supported_io_types": { 00:23:00.745 "read": true, 00:23:00.745 "write": true, 00:23:00.745 "unmap": true, 00:23:00.745 "flush": false, 00:23:00.745 "reset": true, 00:23:00.745 "nvme_admin": false, 00:23:00.745 "nvme_io": false, 00:23:00.745 "nvme_io_md": false, 00:23:00.745 "write_zeroes": true, 00:23:00.745 "zcopy": false, 00:23:00.745 "get_zone_info": false, 00:23:00.745 "zone_management": false, 00:23:00.745 "zone_append": false, 00:23:00.745 "compare": false, 00:23:00.745 "compare_and_write": false, 00:23:00.745 "abort": false, 00:23:00.745 "seek_hole": true, 00:23:00.745 "seek_data": true, 00:23:00.745 "copy": false, 00:23:00.745 "nvme_iov_md": false 00:23:00.745 }, 00:23:00.745 "driver_specific": { 00:23:00.745 "lvol": { 00:23:00.745 "lvol_store_uuid": "05e9054d-1a83-4474-b955-7fd811825a79", 00:23:00.745 "base_bdev": "nvme0n1", 00:23:00.745 "thin_provision": true, 00:23:00.746 "num_allocated_clusters": 0, 00:23:00.746 "snapshot": false, 00:23:00.746 "clone": false, 00:23:00.746 "esnap_clone": false 00:23:00.746 } 00:23:00.746 } 00:23:00.746 } 00:23:00.746 ]' 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:00.746 11:38:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:01.005 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3d135058-dade-4e7f-a58b-3b365a3a2832 00:23:01.264 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:01.264 { 00:23:01.264 "name": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:01.264 "aliases": [ 00:23:01.264 "lvs/nvme0n1p0" 00:23:01.264 ], 00:23:01.264 "product_name": "Logical Volume", 00:23:01.264 "block_size": 4096, 00:23:01.264 "num_blocks": 26476544, 00:23:01.264 "uuid": "3d135058-dade-4e7f-a58b-3b365a3a2832", 00:23:01.264 "assigned_rate_limits": { 00:23:01.264 "rw_ios_per_sec": 0, 00:23:01.264 "rw_mbytes_per_sec": 0, 00:23:01.264 "r_mbytes_per_sec": 0, 00:23:01.264 "w_mbytes_per_sec": 0 00:23:01.264 }, 00:23:01.264 "claimed": false, 00:23:01.264 "zoned": false, 00:23:01.264 "supported_io_types": { 00:23:01.264 "read": true, 00:23:01.264 "write": true, 00:23:01.264 "unmap": true, 00:23:01.264 "flush": false, 00:23:01.265 "reset": true, 00:23:01.265 "nvme_admin": false, 00:23:01.265 "nvme_io": false, 00:23:01.265 "nvme_io_md": false, 00:23:01.265 "write_zeroes": true, 00:23:01.265 "zcopy": false, 00:23:01.265 "get_zone_info": false, 00:23:01.265 "zone_management": false, 00:23:01.265 "zone_append": false, 00:23:01.265 "compare": false, 00:23:01.265 "compare_and_write": false, 00:23:01.265 "abort": false, 00:23:01.265 "seek_hole": true, 00:23:01.265 "seek_data": true, 00:23:01.265 "copy": false, 00:23:01.265 "nvme_iov_md": false 00:23:01.265 }, 00:23:01.265 "driver_specific": { 00:23:01.265 "lvol": { 00:23:01.265 "lvol_store_uuid": "05e9054d-1a83-4474-b955-7fd811825a79", 00:23:01.265 "base_bdev": "nvme0n1", 00:23:01.265 "thin_provision": true, 00:23:01.265 "num_allocated_clusters": 0, 00:23:01.265 "snapshot": false, 00:23:01.265 "clone": false, 00:23:01.265 "esnap_clone": false 00:23:01.265 } 00:23:01.265 } 00:23:01.265 } 00:23:01.265 ]' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 3d135058-dade-4e7f-a58b-3b365a3a2832 
--l2p_dram_limit 10' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:01.265 11:39:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 3d135058-dade-4e7f-a58b-3b365a3a2832 --l2p_dram_limit 10 -c nvc0n1p0 00:23:01.526 [2024-11-05 11:39:00.637060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.637196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:01.526 [2024-11-05 11:39:00.637216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:01.526 [2024-11-05 11:39:00.637224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.637277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.637284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:01.526 [2024-11-05 11:39:00.637292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:01.526 [2024-11-05 11:39:00.637298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.637318] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:01.526 [2024-11-05 11:39:00.637926] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:01.526 [2024-11-05 11:39:00.637941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.637947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:01.526 [2024-11-05 11:39:00.637957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.629 ms 00:23:01.526 [2024-11-05 11:39:00.637963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.638014] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID bf75b195-0fb7-4624-951a-ddedc5463da0 00:23:01.526 [2024-11-05 11:39:00.638969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.638991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:01.526 [2024-11-05 11:39:00.638999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:23:01.526 [2024-11-05 11:39:00.639006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.643707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.643736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:01.526 [2024-11-05 11:39:00.643744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.669 ms 00:23:01.526 [2024-11-05 11:39:00.643753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.643829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.643838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:01.526 [2024-11-05 11:39:00.643860] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:01.526 [2024-11-05 11:39:00.643871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.643906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.526 [2024-11-05 11:39:00.643916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:01.526 [2024-11-05 11:39:00.643922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:01.526 [2024-11-05 11:39:00.643929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.526 [2024-11-05 11:39:00.643948] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:01.527 [2024-11-05 11:39:00.646785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.527 [2024-11-05 11:39:00.646827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:01.527 [2024-11-05 11:39:00.646836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.842 ms 00:23:01.527 [2024-11-05 11:39:00.646845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.527 [2024-11-05 11:39:00.646880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.527 [2024-11-05 11:39:00.646887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:01.527 [2024-11-05 11:39:00.646895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:01.527 [2024-11-05 11:39:00.646900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.527 [2024-11-05 11:39:00.646919] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:01.527 [2024-11-05 11:39:00.647022] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:01.527 [2024-11-05 11:39:00.647034] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:01.527 [2024-11-05 11:39:00.647043] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:01.527 [2024-11-05 11:39:00.647052] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647058] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647066] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:01.527 [2024-11-05 11:39:00.647071] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:01.527 [2024-11-05 11:39:00.647078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:01.527 [2024-11-05 11:39:00.647084] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:01.527 [2024-11-05 11:39:00.647092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.527 [2024-11-05 11:39:00.647097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:01.527 [2024-11-05 11:39:00.647105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.174 ms 00:23:01.527 [2024-11-05 11:39:00.647115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.527 [2024-11-05 11:39:00.647180] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.527 [2024-11-05 11:39:00.647186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:01.527 [2024-11-05 11:39:00.647193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:23:01.527 [2024-11-05 11:39:00.647199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.527 [2024-11-05 11:39:00.647272] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:01.527 [2024-11-05 11:39:00.647281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:01.527 [2024-11-05 11:39:00.647288] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647294] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:01.527 [2024-11-05 11:39:00.647306] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647313] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647318] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:01.527 [2024-11-05 11:39:00.647324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.527 [2024-11-05 11:39:00.647336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:01.527 [2024-11-05 11:39:00.647340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:01.527 [2024-11-05 11:39:00.647347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:01.527 [2024-11-05 11:39:00.647352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:01.527 [2024-11-05 11:39:00.647359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:01.527 [2024-11-05 11:39:00.647364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647372] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:01.527 [2024-11-05 11:39:00.647378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647385] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:01.527 [2024-11-05 11:39:00.647397] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647408] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:01.527 [2024-11-05 11:39:00.647413] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647419] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:01.527 [2024-11-05 11:39:00.647430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647441] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:01.527 [2024-11-05 11:39:00.647446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647452] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:01.527 [2024-11-05 11:39:00.647465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.527 [2024-11-05 11:39:00.647476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:01.527 [2024-11-05 11:39:00.647481] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:01.527 [2024-11-05 11:39:00.647487] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:01.527 [2024-11-05 11:39:00.647493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:01.527 [2024-11-05 11:39:00.647499] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:01.527 [2024-11-05 11:39:00.647504] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647511] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:01.527 [2024-11-05 11:39:00.647515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:01.527 [2024-11-05 11:39:00.647521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647526] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:01.527 [2024-11-05 11:39:00.647533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:01.527 [2024-11-05 11:39:00.647538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:01.527 [2024-11-05 11:39:00.647550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:01.527 [2024-11-05 11:39:00.647559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:01.527 [2024-11-05 11:39:00.647565] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:01.527 [2024-11-05 11:39:00.647571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:01.527 [2024-11-05 11:39:00.647576] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:01.527 [2024-11-05 11:39:00.647583] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:01.527 [2024-11-05 11:39:00.647590] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:01.527 [2024-11-05 11:39:00.647598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.527 [2024-11-05 11:39:00.647604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:01.527 [2024-11-05 11:39:00.647611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:01.527 [2024-11-05 11:39:00.647617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:01.527 [2024-11-05 11:39:00.647624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:01.527 [2024-11-05 11:39:00.647630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:01.527 [2024-11-05 11:39:00.647636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:01.527 [2024-11-05 11:39:00.647642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:01.527 [2024-11-05 11:39:00.647648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:01.527 [2024-11-05 11:39:00.647654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:01.528 [2024-11-05 11:39:00.647662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:01.528 [2024-11-05 11:39:00.647690] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:01.528 [2024-11-05 11:39:00.647698] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.528 [2024-11-05 11:39:00.647712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:01.528 [2024-11-05 11:39:00.647718] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:01.528 [2024-11-05 11:39:00.647725] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:01.528 [2024-11-05 11:39:00.647730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:01.528 [2024-11-05 11:39:00.647737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:01.528 [2024-11-05 11:39:00.647742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:23:01.528 [2024-11-05 11:39:00.647749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:01.528 [2024-11-05 11:39:00.647777] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:01.528 [2024-11-05 11:39:00.647787] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:05.750 [2024-11-05 11:39:04.380869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.381133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:05.750 [2024-11-05 11:39:04.381162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3733.075 ms 00:23:05.750 [2024-11-05 11:39:04.381175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.413676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.413744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:05.750 [2024-11-05 11:39:04.413758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.251 ms 00:23:05.750 [2024-11-05 11:39:04.413768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.413943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.413959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:05.750 [2024-11-05 11:39:04.413969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:05.750 [2024-11-05 11:39:04.413983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.449405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.449463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.750 [2024-11-05 11:39:04.449476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.386 ms 00:23:05.750 [2024-11-05 11:39:04.449486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.449527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.449539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.750 [2024-11-05 11:39:04.449548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:05.750 [2024-11-05 11:39:04.449561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.450216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.450243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.750 [2024-11-05 11:39:04.450255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:23:05.750 [2024-11-05 11:39:04.450265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.450384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.450396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.750 [2024-11-05 11:39:04.450405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:23:05.750 [2024-11-05 11:39:04.450417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.467922] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.468127] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.750 [2024-11-05 11:39:04.468147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.480 ms 00:23:05.750 [2024-11-05 11:39:04.468161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.750 [2024-11-05 11:39:04.481248] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:05.750 [2024-11-05 11:39:04.485112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.750 [2024-11-05 11:39:04.485159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:05.751 [2024-11-05 11:39:04.485173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.852 ms 00:23:05.751 [2024-11-05 11:39:04.485182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.615156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.615228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:05.751 [2024-11-05 11:39:04.615249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 129.936 ms 00:23:05.751 [2024-11-05 11:39:04.615259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.615474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.615487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:05.751 [2024-11-05 11:39:04.615503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:23:05.751 [2024-11-05 11:39:04.615515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.642329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.642384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:05.751 [2024-11-05 11:39:04.642400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.757 ms 00:23:05.751 [2024-11-05 11:39:04.642409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.667749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.667798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:05.751 [2024-11-05 11:39:04.667833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.283 ms 00:23:05.751 [2024-11-05 11:39:04.667841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.668461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.668485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:05.751 [2024-11-05 11:39:04.668499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:23:05.751 [2024-11-05 11:39:04.668507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.758364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.758601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:05.751 [2024-11-05 11:39:04.758637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.808 ms 00:23:05.751 [2024-11-05 11:39:04.758647] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.786660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.786710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:05.751 [2024-11-05 11:39:04.786730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.902 ms 00:23:05.751 [2024-11-05 11:39:04.786738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.813192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.813387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:05.751 [2024-11-05 11:39:04.813414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.396 ms 00:23:05.751 [2024-11-05 11:39:04.813422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.840160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.840341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:05.751 [2024-11-05 11:39:04.840369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.657 ms 00:23:05.751 [2024-11-05 11:39:04.840378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.840430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.840441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:05.751 [2024-11-05 11:39:04.840455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:05.751 [2024-11-05 11:39:04.840464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.840575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.751 [2024-11-05 11:39:04.840587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:05.751 [2024-11-05 11:39:04.840597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:23:05.751 [2024-11-05 11:39:04.840606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.751 [2024-11-05 11:39:04.841731] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4204.160 ms, result 0 00:23:05.751 { 00:23:05.751 "name": "ftl0", 00:23:05.751 "uuid": "bf75b195-0fb7-4624-951a-ddedc5463da0" 00:23:05.751 } 00:23:05.751 11:39:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:05.751 11:39:04 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:06.012 11:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:06.012 11:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:06.012 11:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:06.273 /dev/nbd0 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:06.273 1+0 records in 00:23:06.273 1+0 records out 00:23:06.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347913 s, 11.8 MB/s 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:06.273 11:39:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:06.273 [2024-11-05 11:39:05.406638] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:23:06.273 [2024-11-05 11:39:05.406775] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77289 ] 00:23:06.534 [2024-11-05 11:39:05.567510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.534 [2024-11-05 11:39:05.694836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:07.919  [2024-11-05T11:39:08.136Z] Copying: 190/1024 [MB] (190 MBps) [2024-11-05T11:39:09.107Z] Copying: 381/1024 [MB] (191 MBps) [2024-11-05T11:39:10.050Z] Copying: 606/1024 [MB] (225 MBps) [2024-11-05T11:39:10.621Z] Copying: 866/1024 [MB] (259 MBps) [2024-11-05T11:39:11.192Z] Copying: 1024/1024 [MB] (average 221 MBps) 00:23:11.918 00:23:11.918 11:39:11 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:13.831 11:39:13 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:13.831 [2024-11-05 11:39:13.072635] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
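The xtrace records above walk through the waitfornbd helper (polling /proc/partitions for nbd0) and the first spdk_dd pass that generates the test data and pushes it through the NBD-exposed FTL bdev. Condensed into a plain bash sketch, with helper names, paths, and flags taken only from the trace (an illustration of the traced write phase, not the verbatim dirty_shutdown.sh):

```bash
# Illustrative reconstruction of the write phase seen in the trace above.
SPDK="/home/vagrant/spdk_repo/spdk"

# Expose the FTL bdev as a kernel block device and wait for it to appear.
"$SPDK/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0
for ((i = 1; i <= 20; i++)); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1    # assumed back-off; the captured trace matched on the first attempt
done

# Generate 1 GiB of random data, record its checksum, then write it to the
# FTL device through the nbd mapping with O_DIRECT (262144 x 4096 B = 1024 MiB).
"$SPDK/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile" --bs=4096 --count=262144
md5sum "$SPDK/test/ftl/testfile"
"$SPDK/build/bin/spdk_dd" -m 0x2 --if="$SPDK/test/ftl/testfile" \
    --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct
```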
00:23:13.831 [2024-11-05 11:39:13.072748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77376 ] 00:23:14.092 [2024-11-05 11:39:13.228166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.092 [2024-11-05 11:39:13.304512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.478  [2024-11-05T11:39:15.695Z] Copying: 29/1024 [MB] (29 MBps) [2024-11-05T11:39:16.639Z] Copying: 59/1024 [MB] (29 MBps) [2024-11-05T11:39:17.581Z] Copying: 91/1024 [MB] (31 MBps) [2024-11-05T11:39:18.519Z] Copying: 116/1024 [MB] (25 MBps) [2024-11-05T11:39:19.903Z] Copying: 152/1024 [MB] (35 MBps) [2024-11-05T11:39:20.473Z] Copying: 186/1024 [MB] (33 MBps) [2024-11-05T11:39:21.858Z] Copying: 216/1024 [MB] (29 MBps) [2024-11-05T11:39:22.799Z] Copying: 251/1024 [MB] (35 MBps) [2024-11-05T11:39:23.737Z] Copying: 286/1024 [MB] (34 MBps) [2024-11-05T11:39:24.678Z] Copying: 321/1024 [MB] (35 MBps) [2024-11-05T11:39:25.647Z] Copying: 355/1024 [MB] (33 MBps) [2024-11-05T11:39:26.589Z] Copying: 392/1024 [MB] (36 MBps) [2024-11-05T11:39:27.531Z] Copying: 428/1024 [MB] (35 MBps) [2024-11-05T11:39:28.474Z] Copying: 464/1024 [MB] (36 MBps) [2024-11-05T11:39:29.856Z] Copying: 501/1024 [MB] (36 MBps) [2024-11-05T11:39:30.795Z] Copying: 537/1024 [MB] (36 MBps) [2024-11-05T11:39:31.737Z] Copying: 573/1024 [MB] (36 MBps) [2024-11-05T11:39:32.679Z] Copying: 610/1024 [MB] (36 MBps) [2024-11-05T11:39:33.630Z] Copying: 642/1024 [MB] (32 MBps) [2024-11-05T11:39:34.581Z] Copying: 676/1024 [MB] (33 MBps) [2024-11-05T11:39:35.524Z] Copying: 709/1024 [MB] (32 MBps) [2024-11-05T11:39:36.908Z] Copying: 745/1024 [MB] (36 MBps) [2024-11-05T11:39:37.480Z] Copying: 778/1024 [MB] (33 MBps) [2024-11-05T11:39:38.865Z] Copying: 809/1024 [MB] (30 MBps) [2024-11-05T11:39:39.807Z] Copying: 836/1024 [MB] (26 MBps) [2024-11-05T11:39:40.754Z] Copying: 866/1024 [MB] (30 MBps) [2024-11-05T11:39:41.695Z] Copying: 898/1024 [MB] (31 MBps) [2024-11-05T11:39:42.651Z] Copying: 930/1024 [MB] (32 MBps) [2024-11-05T11:39:43.607Z] Copying: 960/1024 [MB] (30 MBps) [2024-11-05T11:39:44.550Z] Copying: 991/1024 [MB] (31 MBps) [2024-11-05T11:39:44.550Z] Copying: 1022/1024 [MB] (30 MBps) [2024-11-05T11:39:45.122Z] Copying: 1024/1024 [MB] (average 32 MBps) 00:23:45.848 00:23:45.848 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:23:45.848 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:23:46.108 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:46.371 [2024-11-05 11:39:45.449519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.449562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:46.371 [2024-11-05 11:39:45.449573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:46.371 [2024-11-05 11:39:45.449581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.449599] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:46.371 [2024-11-05 11:39:45.451755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.451889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:46.371 [2024-11-05 11:39:45.451906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.140 ms 00:23:46.371 [2024-11-05 11:39:45.451913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.453681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.453704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:46.371 [2024-11-05 11:39:45.453713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.742 ms 00:23:46.371 [2024-11-05 11:39:45.453719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.466248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.466275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:46.371 [2024-11-05 11:39:45.466285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.512 ms 00:23:46.371 [2024-11-05 11:39:45.466293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.471166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.471267] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:46.371 [2024-11-05 11:39:45.471283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.845 ms 00:23:46.371 [2024-11-05 11:39:45.471289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.489124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.489152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:46.371 [2024-11-05 11:39:45.489162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.745 ms 00:23:46.371 [2024-11-05 11:39:45.489168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.501362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.501390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:46.371 [2024-11-05 11:39:45.501400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.162 ms 00:23:46.371 [2024-11-05 11:39:45.501407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.501512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.501520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:46.371 [2024-11-05 11:39:45.501528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:23:46.371 [2024-11-05 11:39:45.501534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.519611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.519636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:46.371 [2024-11-05 11:39:45.519646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.062 ms 00:23:46.371 [2024-11-05 11:39:45.519652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 
11:39:45.537152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.537245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:46.371 [2024-11-05 11:39:45.537260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.470 ms 00:23:46.371 [2024-11-05 11:39:45.537266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.554657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.554683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:46.371 [2024-11-05 11:39:45.554691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.363 ms 00:23:46.371 [2024-11-05 11:39:45.554697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.572081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.371 [2024-11-05 11:39:45.572178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:46.371 [2024-11-05 11:39:45.572193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.327 ms 00:23:46.371 [2024-11-05 11:39:45.572199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.371 [2024-11-05 11:39:45.572225] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:46.372 [2024-11-05 11:39:45.572236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 
261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572659] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:46.372 [2024-11-05 11:39:45.572799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 
11:39:45.572837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:46.373 [2024-11-05 11:39:45.572915] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:46.373 [2024-11-05 11:39:45.572923] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf75b195-0fb7-4624-951a-ddedc5463da0 00:23:46.373 [2024-11-05 11:39:45.572929] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:46.373 [2024-11-05 11:39:45.572937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:46.373 [2024-11-05 11:39:45.572943] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:46.373 [2024-11-05 11:39:45.572950] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:46.373 [2024-11-05 11:39:45.572955] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:46.373 [2024-11-05 11:39:45.572963] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:46.373 [2024-11-05 11:39:45.572969] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:46.373 [2024-11-05 11:39:45.572975] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:46.373 [2024-11-05 11:39:45.572979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:46.373 [2024-11-05 11:39:45.572993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.373 [2024-11-05 11:39:45.572999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:46.373 [2024-11-05 11:39:45.573007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.763 ms 00:23:46.373 [2024-11-05 11:39:45.573013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.582677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.373 [2024-11-05 11:39:45.582700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:46.373 [2024-11-05 11:39:45.582710] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.612 ms 00:23:46.373 [2024-11-05 11:39:45.582717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.583015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:46.373 [2024-11-05 11:39:45.583027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:46.373 [2024-11-05 11:39:45.583036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.279 ms 00:23:46.373 [2024-11-05 11:39:45.583041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.616285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.373 [2024-11-05 11:39:45.616310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:46.373 [2024-11-05 11:39:45.616321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.373 [2024-11-05 11:39:45.616330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.616373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.373 [2024-11-05 11:39:45.616380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:46.373 [2024-11-05 11:39:45.616387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.373 [2024-11-05 11:39:45.616393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.616448] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.373 [2024-11-05 11:39:45.616456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:46.373 [2024-11-05 11:39:45.616463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.373 [2024-11-05 11:39:45.616468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.373 [2024-11-05 11:39:45.616486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.373 [2024-11-05 11:39:45.616492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:46.373 [2024-11-05 11:39:45.616498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.373 [2024-11-05 11:39:45.616504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.675658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.675689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:46.635 [2024-11-05 11:39:45.675699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.675706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:46.635 [2024-11-05 11:39:45.724405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: 
Initialize core IO channel 00:23:46.635 [2024-11-05 11:39:45.724510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:46.635 [2024-11-05 11:39:45.724569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:46.635 [2024-11-05 11:39:45.724657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:46.635 [2024-11-05 11:39:45.724704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:46.635 [2024-11-05 11:39:45.724753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:46.635 [2024-11-05 11:39:45.724826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:46.635 [2024-11-05 11:39:45.724834] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:46.635 [2024-11-05 11:39:45.724839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:46.635 [2024-11-05 11:39:45.724941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 275.396 ms, result 0 00:23:46.635 true 00:23:46.635 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 77146 00:23:46.635 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid77146 00:23:46.635 11:39:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:23:46.635 [2024-11-05 11:39:45.816599] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
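The records that follow cover the dirty-restart phase: the spdk_tgt process (pid 77146) is hard-killed, a second random file is generated, and it is written straight to the ftl0 bdev by spdk_dd using the bdev configuration captured earlier in ftl.json, which the subsequent trace shows bringing the device back up and restoring its metadata. A hedged sketch of that phase, assembled only from the commands visible in the surrounding trace (not the verbatim dirty_shutdown.sh):

```bash
# Sketch of the dirty-restart phase as traced above.
SPDK="/home/vagrant/spdk_repo/spdk"

kill -9 77146                                # hard-kill the spdk_tgt that owned ftl0
rm -f /dev/shm/spdk_tgt_trace.pid77146

# Second data set, written through spdk_dd itself: --json points spdk_dd at the
# bdev configuration saved earlier (the save_subsystem_config output), and
# --seek places the new data after the 262144 blocks already written.
"$SPDK/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK/test/ftl/testfile2" --bs=4096 --count=262144
"$SPDK/build/bin/spdk_dd" --if="$SPDK/test/ftl/testfile2" --ob=ftl0 \
    --count=262144 --seek=262144 --json="$SPDK/test/ftl/config/ftl.json"
```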
00:23:46.635 [2024-11-05 11:39:45.816717] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77722 ] 00:23:46.896 [2024-11-05 11:39:45.971736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:46.896 [2024-11-05 11:39:46.046233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.283  [2024-11-05T11:39:48.498Z] Copying: 260/1024 [MB] (260 MBps) [2024-11-05T11:39:49.438Z] Copying: 522/1024 [MB] (261 MBps) [2024-11-05T11:39:50.381Z] Copying: 781/1024 [MB] (259 MBps) [2024-11-05T11:39:50.980Z] Copying: 1024/1024 [MB] (average 259 MBps) 00:23:51.706 00:23:51.706 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 77146 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:23:51.706 11:39:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:51.706 [2024-11-05 11:39:50.785586] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:23:51.707 [2024-11-05 11:39:50.785673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77775 ] 00:23:51.707 [2024-11-05 11:39:50.935053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.969 [2024-11-05 11:39:51.011201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.969 [2024-11-05 11:39:51.217096] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:51.969 [2024-11-05 11:39:51.217147] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:52.231 [2024-11-05 11:39:51.279563] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:23:52.231 [2024-11-05 11:39:51.279875] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:23:52.231 [2024-11-05 11:39:51.280079] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:23:52.231 [2024-11-05 11:39:51.453944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.231 [2024-11-05 11:39:51.453986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:52.231 [2024-11-05 11:39:51.453999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:52.231 [2024-11-05 11:39:51.454007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.231 [2024-11-05 11:39:51.454056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.454066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:52.232 [2024-11-05 11:39:51.454074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:52.232 [2024-11-05 11:39:51.454081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.454097] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:52.232 [2024-11-05 11:39:51.454757] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:52.232 [2024-11-05 11:39:51.454772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.454780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:52.232 [2024-11-05 11:39:51.454788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms 00:23:52.232 [2024-11-05 11:39:51.454795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.455924] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:52.232 [2024-11-05 11:39:51.468502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.468538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:52.232 [2024-11-05 11:39:51.468549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.580 ms 00:23:52.232 [2024-11-05 11:39:51.468557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.468606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.468615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:52.232 [2024-11-05 11:39:51.468623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:23:52.232 [2024-11-05 11:39:51.468635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.473600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.473628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:52.232 [2024-11-05 11:39:51.473642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.917 ms 00:23:52.232 [2024-11-05 11:39:51.473649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.473714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.473723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:52.232 [2024-11-05 11:39:51.473730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:23:52.232 [2024-11-05 11:39:51.473737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.473783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.473796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:52.232 [2024-11-05 11:39:51.473821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:52.232 [2024-11-05 11:39:51.473829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.473849] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:52.232 [2024-11-05 11:39:51.477168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.477196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:52.232 [2024-11-05 11:39:51.477206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.324 ms 00:23:52.232 [2024-11-05 11:39:51.477213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 
11:39:51.477240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.477248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:52.232 [2024-11-05 11:39:51.477256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:52.232 [2024-11-05 11:39:51.477263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.477281] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:52.232 [2024-11-05 11:39:51.477302] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:52.232 [2024-11-05 11:39:51.477336] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:52.232 [2024-11-05 11:39:51.477351] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:52.232 [2024-11-05 11:39:51.477453] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:52.232 [2024-11-05 11:39:51.477463] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:52.232 [2024-11-05 11:39:51.477474] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:52.232 [2024-11-05 11:39:51.477483] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477495] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477503] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:52.232 [2024-11-05 11:39:51.477510] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:52.232 [2024-11-05 11:39:51.477517] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:52.232 [2024-11-05 11:39:51.477524] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:52.232 [2024-11-05 11:39:51.477531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.477538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:52.232 [2024-11-05 11:39:51.477546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:23:52.232 [2024-11-05 11:39:51.477553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.477634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.232 [2024-11-05 11:39:51.477642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:52.232 [2024-11-05 11:39:51.477651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:23:52.232 [2024-11-05 11:39:51.477658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.232 [2024-11-05 11:39:51.477768] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:52.232 [2024-11-05 11:39:51.477778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:52.232 [2024-11-05 11:39:51.477786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477794] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:52.232 [2024-11-05 11:39:51.477827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:52.232 [2024-11-05 11:39:51.477848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.232 [2024-11-05 11:39:51.477862] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:52.232 [2024-11-05 11:39:51.477874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:52.232 [2024-11-05 11:39:51.477881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:52.232 [2024-11-05 11:39:51.477887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:52.232 [2024-11-05 11:39:51.477895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:52.232 [2024-11-05 11:39:51.477902] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:52.232 [2024-11-05 11:39:51.477915] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:52.232 [2024-11-05 11:39:51.477935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477948] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:52.232 [2024-11-05 11:39:51.477954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:52.232 [2024-11-05 11:39:51.477973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477979] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.232 [2024-11-05 11:39:51.477986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:52.232 [2024-11-05 11:39:51.477992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:52.232 [2024-11-05 11:39:51.477999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:52.232 [2024-11-05 11:39:51.478005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:52.232 [2024-11-05 11:39:51.478011] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:52.232 [2024-11-05 11:39:51.478018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.232 [2024-11-05 11:39:51.478024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:52.232 [2024-11-05 11:39:51.478031] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:52.232 [2024-11-05 11:39:51.478037] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:52.232 [2024-11-05 11:39:51.478043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:52.232 [2024-11-05 11:39:51.478049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:52.232 [2024-11-05 11:39:51.478056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.232 [2024-11-05 11:39:51.478062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:52.232 [2024-11-05 11:39:51.478070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:52.232 [2024-11-05 11:39:51.478078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.233 [2024-11-05 11:39:51.478084] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:52.233 [2024-11-05 11:39:51.478092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:52.233 [2024-11-05 11:39:51.478099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:52.233 [2024-11-05 11:39:51.478109] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:52.233 [2024-11-05 11:39:51.478116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:52.233 [2024-11-05 11:39:51.478123] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:52.233 [2024-11-05 11:39:51.478130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:52.233 [2024-11-05 11:39:51.478137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:52.233 [2024-11-05 11:39:51.478143] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:52.233 [2024-11-05 11:39:51.478150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:52.233 [2024-11-05 11:39:51.478157] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:52.233 [2024-11-05 11:39:51.478166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478174] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:52.233 [2024-11-05 11:39:51.478181] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:52.233 [2024-11-05 11:39:51.478188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:52.233 [2024-11-05 11:39:51.478195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:52.233 [2024-11-05 11:39:51.478202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:52.233 [2024-11-05 11:39:51.478209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:52.233 [2024-11-05 11:39:51.478215] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:52.233 [2024-11-05 
11:39:51.478223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:52.233 [2024-11-05 11:39:51.478229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:52.233 [2024-11-05 11:39:51.478236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478263] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:52.233 [2024-11-05 11:39:51.478270] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:52.233 [2024-11-05 11:39:51.478278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478287] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:52.233 [2024-11-05 11:39:51.478294] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:52.233 [2024-11-05 11:39:51.478301] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:52.233 [2024-11-05 11:39:51.478308] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:52.233 [2024-11-05 11:39:51.478316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.233 [2024-11-05 11:39:51.478323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:52.233 [2024-11-05 11:39:51.478330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.616 ms 00:23:52.233 [2024-11-05 11:39:51.478338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.233 [2024-11-05 11:39:51.504424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.233 [2024-11-05 11:39:51.504568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:52.233 [2024-11-05 11:39:51.504585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.046 ms 00:23:52.233 [2024-11-05 11:39:51.504594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.233 [2024-11-05 11:39:51.504678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.233 [2024-11-05 11:39:51.504690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:52.233 [2024-11-05 11:39:51.504698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:23:52.233 [2024-11-05 11:39:51.504705] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.543617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.543756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:52.495 [2024-11-05 11:39:51.543775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.862 ms 00:23:52.495 [2024-11-05 11:39:51.543787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.543841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.543851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:52.495 [2024-11-05 11:39:51.543860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:52.495 [2024-11-05 11:39:51.543867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.544232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.544258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:52.495 [2024-11-05 11:39:51.544267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:23:52.495 [2024-11-05 11:39:51.544274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.544401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.544410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:52.495 [2024-11-05 11:39:51.544419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:23:52.495 [2024-11-05 11:39:51.544426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.557568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.557601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:52.495 [2024-11-05 11:39:51.557611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.124 ms 00:23:52.495 [2024-11-05 11:39:51.557619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.570778] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:52.495 [2024-11-05 11:39:51.570827] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:52.495 [2024-11-05 11:39:51.570840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.570848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:52.495 [2024-11-05 11:39:51.570857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.115 ms 00:23:52.495 [2024-11-05 11:39:51.570880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.595270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.595307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:52.495 [2024-11-05 11:39:51.595325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.350 ms 00:23:52.495 [2024-11-05 11:39:51.595333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 
11:39:51.607440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.607473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:52.495 [2024-11-05 11:39:51.607482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.069 ms 00:23:52.495 [2024-11-05 11:39:51.607489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.619360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.619392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:52.495 [2024-11-05 11:39:51.619403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.837 ms 00:23:52.495 [2024-11-05 11:39:51.619410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.620021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.620039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:52.495 [2024-11-05 11:39:51.620048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:23:52.495 [2024-11-05 11:39:51.620056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.676483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.676538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:52.495 [2024-11-05 11:39:51.676552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 56.409 ms 00:23:52.495 [2024-11-05 11:39:51.676561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.686981] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:52.495 [2024-11-05 11:39:51.689200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.689233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:52.495 [2024-11-05 11:39:51.689244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.596 ms 00:23:52.495 [2024-11-05 11:39:51.689251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.689334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.689347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:52.495 [2024-11-05 11:39:51.689356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:52.495 [2024-11-05 11:39:51.689364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.689440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.689450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:52.495 [2024-11-05 11:39:51.689458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:52.495 [2024-11-05 11:39:51.689466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.689485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.689493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:52.495 [2024-11-05 11:39:51.689503] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:52.495 [2024-11-05 11:39:51.689511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.689540] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:52.495 [2024-11-05 11:39:51.689550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.689557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:52.495 [2024-11-05 11:39:51.689565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:23:52.495 [2024-11-05 11:39:51.689573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.713553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.713595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:52.495 [2024-11-05 11:39:51.713607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.960 ms 00:23:52.495 [2024-11-05 11:39:51.713614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.713692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:52.495 [2024-11-05 11:39:51.713702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:52.495 [2024-11-05 11:39:51.713711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:23:52.495 [2024-11-05 11:39:51.713719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:52.495 [2024-11-05 11:39:51.714682] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 260.317 ms, result 0 00:23:53.882  [2024-11-05T11:39:53.727Z] Copying: 13/1024 [MB] (13 MBps) [2024-11-05T11:39:55.114Z] Copying: 53/1024 [MB] (40 MBps) [2024-11-05T11:39:56.056Z] Copying: 99/1024 [MB] (45 MBps) [2024-11-05T11:39:57.000Z] Copying: 118/1024 [MB] (19 MBps) [2024-11-05T11:39:57.942Z] Copying: 136/1024 [MB] (17 MBps) [2024-11-05T11:39:58.885Z] Copying: 154/1024 [MB] (18 MBps) [2024-11-05T11:39:59.857Z] Copying: 168/1024 [MB] (13 MBps) [2024-11-05T11:40:00.795Z] Copying: 184/1024 [MB] (16 MBps) [2024-11-05T11:40:01.737Z] Copying: 204/1024 [MB] (20 MBps) [2024-11-05T11:40:03.129Z] Copying: 222/1024 [MB] (17 MBps) [2024-11-05T11:40:04.072Z] Copying: 237/1024 [MB] (14 MBps) [2024-11-05T11:40:05.014Z] Copying: 249/1024 [MB] (11 MBps) [2024-11-05T11:40:05.956Z] Copying: 259/1024 [MB] (10 MBps) [2024-11-05T11:40:06.901Z] Copying: 280/1024 [MB] (20 MBps) [2024-11-05T11:40:07.843Z] Copying: 298/1024 [MB] (17 MBps) [2024-11-05T11:40:08.813Z] Copying: 336/1024 [MB] (38 MBps) [2024-11-05T11:40:09.757Z] Copying: 358/1024 [MB] (22 MBps) [2024-11-05T11:40:11.140Z] Copying: 380/1024 [MB] (21 MBps) [2024-11-05T11:40:12.084Z] Copying: 396/1024 [MB] (15 MBps) [2024-11-05T11:40:13.026Z] Copying: 413/1024 [MB] (17 MBps) [2024-11-05T11:40:13.966Z] Copying: 428/1024 [MB] (14 MBps) [2024-11-05T11:40:14.905Z] Copying: 439/1024 [MB] (10 MBps) [2024-11-05T11:40:15.848Z] Copying: 450/1024 [MB] (11 MBps) [2024-11-05T11:40:16.791Z] Copying: 465/1024 [MB] (15 MBps) [2024-11-05T11:40:17.734Z] Copying: 479/1024 [MB] (14 MBps) [2024-11-05T11:40:19.120Z] Copying: 497/1024 [MB] (17 MBps) [2024-11-05T11:40:20.062Z] Copying: 509/1024 [MB] (11 MBps) [2024-11-05T11:40:21.005Z] Copying: 519/1024 [MB] (10 MBps) 
[2024-11-05T11:40:21.949Z] Copying: 534/1024 [MB] (14 MBps) [2024-11-05T11:40:22.892Z] Copying: 554/1024 [MB] (19 MBps) [2024-11-05T11:40:23.850Z] Copying: 573/1024 [MB] (18 MBps) [2024-11-05T11:40:24.789Z] Copying: 594/1024 [MB] (21 MBps) [2024-11-05T11:40:25.727Z] Copying: 611/1024 [MB] (16 MBps) [2024-11-05T11:40:27.112Z] Copying: 622/1024 [MB] (11 MBps) [2024-11-05T11:40:28.048Z] Copying: 635/1024 [MB] (12 MBps) [2024-11-05T11:40:28.981Z] Copying: 664/1024 [MB] (28 MBps) [2024-11-05T11:40:29.917Z] Copying: 684/1024 [MB] (20 MBps) [2024-11-05T11:40:30.853Z] Copying: 700/1024 [MB] (16 MBps) [2024-11-05T11:40:31.785Z] Copying: 711/1024 [MB] (10 MBps) [2024-11-05T11:40:33.158Z] Copying: 722/1024 [MB] (11 MBps) [2024-11-05T11:40:34.092Z] Copying: 744/1024 [MB] (21 MBps) [2024-11-05T11:40:35.048Z] Copying: 765/1024 [MB] (21 MBps) [2024-11-05T11:40:35.981Z] Copying: 789/1024 [MB] (23 MBps) [2024-11-05T11:40:36.916Z] Copying: 811/1024 [MB] (22 MBps) [2024-11-05T11:40:37.849Z] Copying: 827/1024 [MB] (15 MBps) [2024-11-05T11:40:38.790Z] Copying: 853/1024 [MB] (25 MBps) [2024-11-05T11:40:39.733Z] Copying: 871/1024 [MB] (18 MBps) [2024-11-05T11:40:41.109Z] Copying: 888/1024 [MB] (17 MBps) [2024-11-05T11:40:42.043Z] Copying: 907/1024 [MB] (18 MBps) [2024-11-05T11:40:43.030Z] Copying: 928/1024 [MB] (20 MBps) [2024-11-05T11:40:43.994Z] Copying: 948/1024 [MB] (20 MBps) [2024-11-05T11:40:44.937Z] Copying: 962/1024 [MB] (13 MBps) [2024-11-05T11:40:45.878Z] Copying: 982/1024 [MB] (20 MBps) [2024-11-05T11:40:46.818Z] Copying: 997/1024 [MB] (14 MBps) [2024-11-05T11:40:47.759Z] Copying: 1008/1024 [MB] (11 MBps) [2024-11-05T11:40:48.692Z] Copying: 1023/1024 [MB] (14 MBps) [2024-11-05T11:40:48.692Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-11-05 11:40:48.540322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.540379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:49.418 [2024-11-05 11:40:48.540398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:24:49.418 [2024-11-05 11:40:48.540410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.540437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:49.418 [2024-11-05 11:40:48.543361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.543397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:49.418 [2024-11-05 11:40:48.543414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.905 ms 00:24:49.418 [2024-11-05 11:40:48.543426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.554117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.554172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:49.418 [2024-11-05 11:40:48.554187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.665 ms 00:24:49.418 [2024-11-05 11:40:48.554199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.578993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.579029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:49.418 [2024-11-05 11:40:48.579045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 24.774 ms 00:24:49.418 [2024-11-05 11:40:48.579056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.585302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.585425] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:49.418 [2024-11-05 11:40:48.585452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.208 ms 00:24:49.418 [2024-11-05 11:40:48.585464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.609814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.609851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:49.418 [2024-11-05 11:40:48.609867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.286 ms 00:24:49.418 [2024-11-05 11:40:48.609878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.418 [2024-11-05 11:40:48.624336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.418 [2024-11-05 11:40:48.624459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:49.418 [2024-11-05 11:40:48.624481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.417 ms 00:24:49.418 [2024-11-05 11:40:48.624493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.797100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.677 [2024-11-05 11:40:48.797145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:49.677 [2024-11-05 11:40:48.797161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 172.525 ms 00:24:49.677 [2024-11-05 11:40:48.797178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.820553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.677 [2024-11-05 11:40:48.820585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:49.677 [2024-11-05 11:40:48.820600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.354 ms 00:24:49.677 [2024-11-05 11:40:48.820611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.843531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.677 [2024-11-05 11:40:48.843651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:49.677 [2024-11-05 11:40:48.843672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.880 ms 00:24:49.677 [2024-11-05 11:40:48.843684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.866713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.677 [2024-11-05 11:40:48.866847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:49.677 [2024-11-05 11:40:48.866866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.950 ms 00:24:49.677 [2024-11-05 11:40:48.866877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.889662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.677 [2024-11-05 11:40:48.889768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:49.677 [2024-11-05 
11:40:48.889855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.659 ms 00:24:49.677 [2024-11-05 11:40:48.889893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.677 [2024-11-05 11:40:48.889950] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:49.677 [2024-11-05 11:40:48.889992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106496 / 261120 wr_cnt: 1 state: open 00:24:49.677 [2024-11-05 11:40:48.890046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:49.677 [2024-11-05 11:40:48.890151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:49.677 [2024-11-05 11:40:48.890202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.890938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.891999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.892930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893722] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.893991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.894893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 
11:40:48.895456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:49.678 [2024-11-05 11:40:48.895737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:49.679 [2024-11-05 11:40:48.895749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:49.679 [2024-11-05 11:40:48.895761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 
00:24:49.679 [2024-11-05 11:40:48.895774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:49.679 [2024-11-05 11:40:48.895788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:49.679 [2024-11-05 11:40:48.895818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:49.679 [2024-11-05 11:40:48.895845] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:49.679 [2024-11-05 11:40:48.895858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf75b195-0fb7-4624-951a-ddedc5463da0 00:24:49.679 [2024-11-05 11:40:48.895871] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106496 00:24:49.679 [2024-11-05 11:40:48.895883] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107456 00:24:49.679 [2024-11-05 11:40:48.895907] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106496 00:24:49.679 [2024-11-05 11:40:48.895920] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:24:49.679 [2024-11-05 11:40:48.895932] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:49.679 [2024-11-05 11:40:48.895945] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:49.679 [2024-11-05 11:40:48.895957] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:49.679 [2024-11-05 11:40:48.895968] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:49.679 [2024-11-05 11:40:48.895979] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:49.679 [2024-11-05 11:40:48.895992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.679 [2024-11-05 11:40:48.896004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:49.679 [2024-11-05 11:40:48.896017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.042 ms 00:24:49.679 [2024-11-05 11:40:48.896029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.908862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.679 [2024-11-05 11:40:48.908894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:49.679 [2024-11-05 11:40:48.908908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.805 ms 00:24:49.679 [2024-11-05 11:40:48.908919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.909361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:49.679 [2024-11-05 11:40:48.909388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:49.679 [2024-11-05 11:40:48.909402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:24:49.679 [2024-11-05 11:40:48.909413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.942067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.679 [2024-11-05 11:40:48.942101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:49.679 [2024-11-05 11:40:48.942115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.679 [2024-11-05 11:40:48.942126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.942194] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.679 [2024-11-05 11:40:48.942207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:49.679 [2024-11-05 11:40:48.942221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.679 [2024-11-05 11:40:48.942234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.942328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.679 [2024-11-05 11:40:48.942343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:49.679 [2024-11-05 11:40:48.942357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.679 [2024-11-05 11:40:48.942369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.679 [2024-11-05 11:40:48.942390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.679 [2024-11-05 11:40:48.942404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:49.679 [2024-11-05 11:40:48.942416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.679 [2024-11-05 11:40:48.942429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.019355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.019394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:49.937 [2024-11-05 11:40:49.019410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.019421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:49.937 [2024-11-05 11:40:49.083204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:49.937 [2024-11-05 11:40:49.083310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:49.937 [2024-11-05 11:40:49.083418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:49.937 [2024-11-05 11:40:49.083589] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083601] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:49.937 [2024-11-05 11:40:49.083671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:49.937 [2024-11-05 11:40:49.083761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.083860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:49.937 [2024-11-05 11:40:49.083878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:49.937 [2024-11-05 11:40:49.083891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:49.937 [2024-11-05 11:40:49.083903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:49.937 [2024-11-05 11:40:49.084067] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 543.707 ms, result 0 00:24:51.314 00:24:51.314 00:24:51.314 11:40:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:24:53.251 11:40:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:53.251 [2024-11-05 11:40:52.475874] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:24:53.251 [2024-11-05 11:40:52.476150] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78399 ] 00:24:53.512 [2024-11-05 11:40:52.637053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.512 [2024-11-05 11:40:52.759354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.087 [2024-11-05 11:40:53.053893] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:54.087 [2024-11-05 11:40:53.053975] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:54.087 [2024-11-05 11:40:53.214550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.214616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:54.087 [2024-11-05 11:40:53.214634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:54.087 [2024-11-05 11:40:53.214643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.214701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.214713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:54.087 [2024-11-05 11:40:53.214724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:54.087 [2024-11-05 11:40:53.214732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.214754] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:54.087 [2024-11-05 11:40:53.215939] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:54.087 [2024-11-05 11:40:53.215995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.216007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:54.087 [2024-11-05 11:40:53.216017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.247 ms 00:24:54.087 [2024-11-05 11:40:53.216026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.217756] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:54.087 [2024-11-05 11:40:53.232033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.232086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:54.087 [2024-11-05 11:40:53.232101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.279 ms 00:24:54.087 [2024-11-05 11:40:53.232109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.232189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.232203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:54.087 [2024-11-05 11:40:53.232213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:24:54.087 [2024-11-05 11:40:53.232220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.240498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:24:54.087 [2024-11-05 11:40:53.240545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:54.087 [2024-11-05 11:40:53.240556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.197 ms 00:24:54.087 [2024-11-05 11:40:53.240564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.240652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.240662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:54.087 [2024-11-05 11:40:53.240671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:54.087 [2024-11-05 11:40:53.240679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.240723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.240734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:54.087 [2024-11-05 11:40:53.240742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:54.087 [2024-11-05 11:40:53.240750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.240775] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:54.087 [2024-11-05 11:40:53.244988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.245032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:54.087 [2024-11-05 11:40:53.245043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.219 ms 00:24:54.087 [2024-11-05 11:40:53.245054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.245090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.245099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:54.087 [2024-11-05 11:40:53.245108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:24:54.087 [2024-11-05 11:40:53.245116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.245169] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:54.087 [2024-11-05 11:40:53.245191] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:54.087 [2024-11-05 11:40:53.245229] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:54.087 [2024-11-05 11:40:53.245248] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:54.087 [2024-11-05 11:40:53.245353] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:54.087 [2024-11-05 11:40:53.245365] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:54.087 [2024-11-05 11:40:53.245376] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:54.087 [2024-11-05 11:40:53.245387] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:54.087 [2024-11-05 11:40:53.245396] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:54.087 [2024-11-05 11:40:53.245405] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:54.087 [2024-11-05 11:40:53.245413] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:54.087 [2024-11-05 11:40:53.245421] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:54.087 [2024-11-05 11:40:53.245428] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:54.087 [2024-11-05 11:40:53.245439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.245447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:54.087 [2024-11-05 11:40:53.245456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:24:54.087 [2024-11-05 11:40:53.245464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.245547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.087 [2024-11-05 11:40:53.245556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:54.087 [2024-11-05 11:40:53.245564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:54.087 [2024-11-05 11:40:53.245571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.087 [2024-11-05 11:40:53.245675] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:54.088 [2024-11-05 11:40:53.245688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:54.088 [2024-11-05 11:40:53.245697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:54.088 [2024-11-05 11:40:53.245720] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:54.088 [2024-11-05 11:40:53.245744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.088 [2024-11-05 11:40:53.245758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:54.088 [2024-11-05 11:40:53.245765] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:54.088 [2024-11-05 11:40:53.245772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:54.088 [2024-11-05 11:40:53.245779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:54.088 [2024-11-05 11:40:53.245788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:54.088 [2024-11-05 11:40:53.245832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:54.088 [2024-11-05 11:40:53.245848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245855] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:54.088 [2024-11-05 11:40:53.245870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245878] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245886] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:54.088 [2024-11-05 11:40:53.245893] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:54.088 [2024-11-05 11:40:53.245914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:54.088 [2024-11-05 11:40:53.245936] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:54.088 [2024-11-05 11:40:53.245950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:54.088 [2024-11-05 11:40:53.245957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:54.088 [2024-11-05 11:40:53.245964] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.088 [2024-11-05 11:40:53.245971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:54.088 [2024-11-05 11:40:53.245979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:54.088 [2024-11-05 11:40:53.245987] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:54.088 [2024-11-05 11:40:53.245994] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:54.088 [2024-11-05 11:40:53.246001] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:54.088 [2024-11-05 11:40:53.246008] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.246015] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:54.088 [2024-11-05 11:40:53.246022] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:54.088 [2024-11-05 11:40:53.246029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.246036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:54.088 [2024-11-05 11:40:53.246045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:54.088 [2024-11-05 11:40:53.246052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:54.088 [2024-11-05 11:40:53.246060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:54.088 [2024-11-05 11:40:53.246069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:54.088 [2024-11-05 11:40:53.246075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:54.088 [2024-11-05 11:40:53.246083] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:54.088 
[2024-11-05 11:40:53.246090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:54.088 [2024-11-05 11:40:53.246097] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:54.088 [2024-11-05 11:40:53.246104] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:54.088 [2024-11-05 11:40:53.246112] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:54.088 [2024-11-05 11:40:53.246122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:54.088 [2024-11-05 11:40:53.246139] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:54.088 [2024-11-05 11:40:53.246147] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:54.088 [2024-11-05 11:40:53.246154] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:54.088 [2024-11-05 11:40:53.246162] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:54.088 [2024-11-05 11:40:53.246170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:54.088 [2024-11-05 11:40:53.246177] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:54.088 [2024-11-05 11:40:53.246185] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:54.088 [2024-11-05 11:40:53.246192] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:54.088 [2024-11-05 11:40:53.246199] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246206] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:54.088 [2024-11-05 11:40:53.246234] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:54.088 [2024-11-05 11:40:53.246243] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:54.088 [2024-11-05 11:40:53.246261] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:54.088 [2024-11-05 11:40:53.246269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:54.088 [2024-11-05 11:40:53.246276] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:54.088 [2024-11-05 11:40:53.246284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.246292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:54.088 [2024-11-05 11:40:53.246299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:24:54.088 [2024-11-05 11:40:53.246308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.278656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.278870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:54.088 [2024-11-05 11:40:53.278958] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.304 ms 00:24:54.088 [2024-11-05 11:40:53.278985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.279090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.279122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:54.088 [2024-11-05 11:40:53.279143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:24:54.088 [2024-11-05 11:40:53.279162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.321742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.321971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:54.088 [2024-11-05 11:40:53.322043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.507 ms 00:24:54.088 [2024-11-05 11:40:53.322068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.322132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.322157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:54.088 [2024-11-05 11:40:53.322178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:54.088 [2024-11-05 11:40:53.322205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.322785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.088 [2024-11-05 11:40:53.322967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:54.088 [2024-11-05 11:40:53.323027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.488 ms 00:24:54.088 [2024-11-05 11:40:53.323051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.088 [2024-11-05 11:40:53.323224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.089 [2024-11-05 11:40:53.323250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:54.089 [2024-11-05 11:40:53.323271] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:24:54.089 [2024-11-05 11:40:53.323290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.089 [2024-11-05 11:40:53.339080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.089 [2024-11-05 11:40:53.339255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:54.089 [2024-11-05 11:40:53.339313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.751 ms 00:24:54.089 [2024-11-05 11:40:53.339342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.089 [2024-11-05 11:40:53.353754] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:24:54.089 [2024-11-05 11:40:53.353966] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:54.089 [2024-11-05 11:40:53.354034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.089 [2024-11-05 11:40:53.354056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:54.089 [2024-11-05 11:40:53.354078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.565 ms 00:24:54.089 [2024-11-05 11:40:53.354096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.379942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.380127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:54.351 [2024-11-05 11:40:53.380191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.792 ms 00:24:54.351 [2024-11-05 11:40:53.380214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.393160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.393341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:54.351 [2024-11-05 11:40:53.393399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.888 ms 00:24:54.351 [2024-11-05 11:40:53.393422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.406438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.406615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:54.351 [2024-11-05 11:40:53.406673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.887 ms 00:24:54.351 [2024-11-05 11:40:53.406694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.407506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.407587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:54.351 [2024-11-05 11:40:53.407680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.541 ms 00:24:54.351 [2024-11-05 11:40:53.407704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.473335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.473571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:54.351 [2024-11-05 11:40:53.473638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.588 ms 00:24:54.351 [2024-11-05 11:40:53.473671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.484953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:54.351 [2024-11-05 11:40:53.488169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.488325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:54.351 [2024-11-05 11:40:53.488383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:24:54.351 [2024-11-05 11:40:53.488407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.488508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.488538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:54.351 [2024-11-05 11:40:53.488560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:24:54.351 [2024-11-05 11:40:53.488580] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.490361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.490531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:54.351 [2024-11-05 11:40:53.490593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.724 ms 00:24:54.351 [2024-11-05 11:40:53.490606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.490644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.490654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:54.351 [2024-11-05 11:40:53.490664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:24:54.351 [2024-11-05 11:40:53.490673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.490714] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:54.351 [2024-11-05 11:40:53.490729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.490738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:54.351 [2024-11-05 11:40:53.490746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:54.351 [2024-11-05 11:40:53.490755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.516438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.516490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:54.351 [2024-11-05 11:40:53.516504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.663 ms 00:24:54.351 [2024-11-05 11:40:53.516512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:54.351 [2024-11-05 11:40:53.516610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:54.351 [2024-11-05 11:40:53.516620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:54.351 [2024-11-05 11:40:53.516631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:24:54.351 [2024-11-05 11:40:53.516639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
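A note on the layout dump printed during this startup: ftl_superblock_v5_md_layout_dump lists each metadata region as raw blk_offs/blk_sz values in FTL blocks, while dump_region reports the same regions in MiB. The two views agree if one FTL block is 4 KiB, which is an inference from the numbers in this log rather than something the log states. A minimal Python sketch of that conversion, with the block size and the region_mib helper being my own assumptions and the two regions copied from the dump above:

    # Convert blk_offs/blk_sz from the superblock layout dump into the MiB
    # figures printed by dump_region. The 4 KiB FTL block size is assumed,
    # inferred from the numbers in this log.
    FTL_BLOCK_SIZE = 4096  # bytes, assumed

    def region_mib(blk_offs, blk_sz):
        to_mib = FTL_BLOCK_SIZE / (1024 * 1024)
        return blk_offs * to_mib, blk_sz * to_mib

    # l2p region (type:0x2 blk_offs:0x20 blk_sz:0x5000)
    print(region_mib(0x20, 0x5000))      # (0.125, 80.0)    -> "offset: 0.12 MiB, blocks: 80.00 MiB"
    # base-device data region (type:0x9 blk_offs:0x40 blk_sz:0x1900000)
    print(region_mib(0x40, 0x1900000))   # (0.25, 102400.0) -> "offset: 0.25 MiB, blocks: 102400.00 MiB"

The same arithmetic reproduces the smaller entries too; for instance the vmap region's 0x360 blocks come out to the 3.38 MiB shown in the dump_region output further down.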
00:24:54.351 [2024-11-05 11:40:53.518103] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 303.023 ms, result 0 00:24:55.737  [2024-11-05T11:40:55.954Z] Copying: 1400/1048576 [kB] (1400 kBps) [2024-11-05T11:40:56.898Z] Copying: 5312/1048576 [kB] (3912 kBps) [2024-11-05T11:40:57.841Z] Copying: 20/1024 [MB] (14 MBps) [2024-11-05T11:40:58.777Z] Copying: 51/1024 [MB] (31 MBps) [2024-11-05T11:40:59.713Z] Copying: 96/1024 [MB] (44 MBps) [2024-11-05T11:41:01.091Z] Copying: 124/1024 [MB] (28 MBps) [2024-11-05T11:41:02.035Z] Copying: 152/1024 [MB] (27 MBps) [2024-11-05T11:41:02.980Z] Copying: 181/1024 [MB] (28 MBps) [2024-11-05T11:41:03.923Z] Copying: 212/1024 [MB] (30 MBps) [2024-11-05T11:41:04.897Z] Copying: 237/1024 [MB] (25 MBps) [2024-11-05T11:41:05.838Z] Copying: 268/1024 [MB] (31 MBps) [2024-11-05T11:41:06.779Z] Copying: 296/1024 [MB] (27 MBps) [2024-11-05T11:41:07.722Z] Copying: 327/1024 [MB] (30 MBps) [2024-11-05T11:41:09.102Z] Copying: 358/1024 [MB] (30 MBps) [2024-11-05T11:41:10.035Z] Copying: 389/1024 [MB] (31 MBps) [2024-11-05T11:41:10.967Z] Copying: 418/1024 [MB] (29 MBps) [2024-11-05T11:41:11.908Z] Copying: 448/1024 [MB] (29 MBps) [2024-11-05T11:41:12.917Z] Copying: 476/1024 [MB] (28 MBps) [2024-11-05T11:41:13.857Z] Copying: 503/1024 [MB] (27 MBps) [2024-11-05T11:41:14.790Z] Copying: 528/1024 [MB] (24 MBps) [2024-11-05T11:41:15.722Z] Copying: 555/1024 [MB] (27 MBps) [2024-11-05T11:41:17.106Z] Copying: 585/1024 [MB] (29 MBps) [2024-11-05T11:41:18.038Z] Copying: 617/1024 [MB] (32 MBps) [2024-11-05T11:41:18.975Z] Copying: 644/1024 [MB] (27 MBps) [2024-11-05T11:41:19.906Z] Copying: 674/1024 [MB] (30 MBps) [2024-11-05T11:41:20.837Z] Copying: 707/1024 [MB] (32 MBps) [2024-11-05T11:41:21.772Z] Copying: 730/1024 [MB] (22 MBps) [2024-11-05T11:41:22.744Z] Copying: 761/1024 [MB] (30 MBps) [2024-11-05T11:41:24.117Z] Copying: 791/1024 [MB] (30 MBps) [2024-11-05T11:41:25.058Z] Copying: 820/1024 [MB] (28 MBps) [2024-11-05T11:41:25.993Z] Copying: 851/1024 [MB] (31 MBps) [2024-11-05T11:41:26.929Z] Copying: 881/1024 [MB] (29 MBps) [2024-11-05T11:41:27.869Z] Copying: 912/1024 [MB] (30 MBps) [2024-11-05T11:41:28.818Z] Copying: 939/1024 [MB] (26 MBps) [2024-11-05T11:41:29.748Z] Copying: 957/1024 [MB] (17 MBps) [2024-11-05T11:41:31.124Z] Copying: 973/1024 [MB] (16 MBps) [2024-11-05T11:41:32.064Z] Copying: 990/1024 [MB] (16 MBps) [2024-11-05T11:41:33.007Z] Copying: 1007/1024 [MB] (17 MBps) [2024-11-05T11:41:33.007Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-11-05 11:41:32.787405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.787502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:33.733 [2024-11-05 11:41:32.787540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:33.733 [2024-11-05 11:41:32.787556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.787597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:33.733 [2024-11-05 11:41:32.793179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.793398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:33.733 [2024-11-05 11:41:32.793515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.553 ms 00:25:33.733 [2024-11-05 11:41:32.793558] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.794011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.794072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:33.733 [2024-11-05 11:41:32.794210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:25:33.733 [2024-11-05 11:41:32.794252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.809470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.809665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:33.733 [2024-11-05 11:41:32.809900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.165 ms 00:25:33.733 [2024-11-05 11:41:32.809946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.816257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.816423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:33.733 [2024-11-05 11:41:32.816622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.258 ms 00:25:33.733 [2024-11-05 11:41:32.816679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.844174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.844367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:33.733 [2024-11-05 11:41:32.844432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.431 ms 00:25:33.733 [2024-11-05 11:41:32.844455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.861391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.861586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:33.733 [2024-11-05 11:41:32.861652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.885 ms 00:25:33.733 [2024-11-05 11:41:32.861676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.866266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.866432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:33.733 [2024-11-05 11:41:32.866451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.430 ms 00:25:33.733 [2024-11-05 11:41:32.866461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.893380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.893569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:33.733 [2024-11-05 11:41:32.893591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.899 ms 00:25:33.733 [2024-11-05 11:41:32.893598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.919587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.919638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:33.733 [2024-11-05 11:41:32.919664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.948 ms 
00:25:33.733 [2024-11-05 11:41:32.919671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.944838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.944887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:33.733 [2024-11-05 11:41:32.944900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.116 ms 00:25:33.733 [2024-11-05 11:41:32.944907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.970664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.733 [2024-11-05 11:41:32.970713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:33.733 [2024-11-05 11:41:32.970725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.680 ms 00:25:33.733 [2024-11-05 11:41:32.970733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.733 [2024-11-05 11:41:32.970783] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:33.733 [2024-11-05 11:41:32.970797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:25:33.733 [2024-11-05 11:41:32.970840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:25:33.734 [2024-11-05 11:41:32.970849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.970992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:25:33.734 [2024-11-05 11:41:32.971008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:33.734 [2024-11-05 11:41:32.971592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971599] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:33.735 [2024-11-05 11:41:32.971687] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:33.735 [2024-11-05 11:41:32.971696] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf75b195-0fb7-4624-951a-ddedc5463da0 00:25:33.735 [2024-11-05 11:41:32.971705] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:25:33.735 [2024-11-05 11:41:32.971713] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 158144 00:25:33.735 [2024-11-05 11:41:32.971721] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 156160 00:25:33.735 [2024-11-05 11:41:32.971730] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:25:33.735 [2024-11-05 11:41:32.971741] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:33.735 [2024-11-05 11:41:32.971751] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:33.735 [2024-11-05 11:41:32.971759] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:33.735 [2024-11-05 11:41:32.971774] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:33.735 [2024-11-05 11:41:32.971780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:33.735 [2024-11-05 11:41:32.971788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.735 [2024-11-05 11:41:32.971796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:33.735 [2024-11-05 11:41:32.971817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.007 ms 00:25:33.735 [2024-11-05 11:41:32.971825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.735 [2024-11-05 11:41:32.985809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.735 [2024-11-05 11:41:32.985989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:33.735 [2024-11-05 11:41:32.986016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.937 ms 00:25:33.735 [2024-11-05 11:41:32.986024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.735 [2024-11-05 11:41:32.986410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:33.735 [2024-11-05 11:41:32.986420] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:33.735 [2024-11-05 11:41:32.986429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:25:33.735 [2024-11-05 11:41:32.986436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.995 [2024-11-05 11:41:33.023211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.995 [2024-11-05 11:41:33.023264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:33.995 [2024-11-05 11:41:33.023277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.995 [2024-11-05 11:41:33.023285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.995 [2024-11-05 11:41:33.023347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.995 [2024-11-05 11:41:33.023356] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:33.995 [2024-11-05 11:41:33.023365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.023373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.023464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.023478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:33.996 [2024-11-05 11:41:33.023488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.023496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.023512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.023521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:33.996 [2024-11-05 11:41:33.023529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.023536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.109381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.109640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:33.996 [2024-11-05 11:41:33.109662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.109671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.180352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.180588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:33.996 [2024-11-05 11:41:33.180610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.180619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.180686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.180696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:33.996 [2024-11-05 11:41:33.180712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.180721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.180780] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.180791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:33.996 [2024-11-05 11:41:33.180837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.180846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.180957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.180968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:33.996 [2024-11-05 11:41:33.180977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.180990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.181028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.181039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:33.996 [2024-11-05 11:41:33.181048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.181057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.181097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.181106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:33.996 [2024-11-05 11:41:33.181115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.181124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.181173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:33.996 [2024-11-05 11:41:33.181184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:33.996 [2024-11-05 11:41:33.181193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:33.996 [2024-11-05 11:41:33.181201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:33.996 [2024-11-05 11:41:33.181336] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 393.909 ms, result 0 00:25:34.935 00:25:34.935 00:25:34.936 11:41:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:37.509 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:37.509 11:41:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:37.509 [2024-11-05 11:41:36.254479] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
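The shutdown statistics dumped a little above report total writes 158144 against user writes 156160 and a WAF of 1.0127, which is consistent with WAF simply being the ratio of those two counters; the md5sum -c step on this line then confirms testfile still matches its stored checksum after the dirty shutdown, and spdk_dd reads a further range of ftl0 (--count=262144 --skip=262144) into testfile2. A quick sketch of the WAF arithmetic, with the counters copied verbatim from the dump and the total/user interpretation being an assumption the logged values happen to satisfy:

    # Reproduce the WAF figure from the ftl_dev_dump_stats output above.
    # Treating WAF as total writes divided by user writes is an assumption;
    # the counters below are taken verbatim from the log.
    total_writes = 158144   # "total writes" from the stats dump
    user_writes = 156160    # "user writes" from the stats dump

    print(f"WAF: {total_writes / user_writes:.4f}")   # WAF: 1.0127, matching the logged value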
00:25:37.509 [2024-11-05 11:41:36.254852] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78850 ] 00:25:37.509 [2024-11-05 11:41:36.412491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:37.509 [2024-11-05 11:41:36.535519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:37.771 [2024-11-05 11:41:36.825796] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.771 [2024-11-05 11:41:36.825900] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:37.771 [2024-11-05 11:41:36.988735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:36.989042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:37.771 [2024-11-05 11:41:36.989075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:37.771 [2024-11-05 11:41:36.989084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:36.989156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:36.989167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:37.771 [2024-11-05 11:41:36.989179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:25:37.771 [2024-11-05 11:41:36.989188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:36.989209] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:37.771 [2024-11-05 11:41:36.989955] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:37.771 [2024-11-05 11:41:36.989977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:36.989985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:37.771 [2024-11-05 11:41:36.989995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.773 ms 00:25:37.771 [2024-11-05 11:41:36.990003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:36.991604] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:37.771 [2024-11-05 11:41:37.006030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.006080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:37.771 [2024-11-05 11:41:37.006094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.428 ms 00:25:37.771 [2024-11-05 11:41:37.006102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.006174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.006187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:37.771 [2024-11-05 11:41:37.006197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:25:37.771 [2024-11-05 11:41:37.006204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.014025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:37.771 [2024-11-05 11:41:37.014208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:37.771 [2024-11-05 11:41:37.014227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.742 ms 00:25:37.771 [2024-11-05 11:41:37.014237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.014324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.014334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:37.771 [2024-11-05 11:41:37.014343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:25:37.771 [2024-11-05 11:41:37.014351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.014395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.014406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:37.771 [2024-11-05 11:41:37.014415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:37.771 [2024-11-05 11:41:37.014423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.014446] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:37.771 [2024-11-05 11:41:37.018487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.018525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:37.771 [2024-11-05 11:41:37.018536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.047 ms 00:25:37.771 [2024-11-05 11:41:37.018547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.018582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.771 [2024-11-05 11:41:37.018591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:37.771 [2024-11-05 11:41:37.018600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:25:37.771 [2024-11-05 11:41:37.018608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.771 [2024-11-05 11:41:37.018659] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:37.771 [2024-11-05 11:41:37.018682] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:37.772 [2024-11-05 11:41:37.018719] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:37.772 [2024-11-05 11:41:37.018738] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:37.772 [2024-11-05 11:41:37.018865] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:37.772 [2024-11-05 11:41:37.018878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:37.772 [2024-11-05 11:41:37.018888] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:37.772 [2024-11-05 11:41:37.018900] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:37.772 [2024-11-05 11:41:37.018909] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:37.772 [2024-11-05 11:41:37.018919] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:37.772 [2024-11-05 11:41:37.018926] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:37.772 [2024-11-05 11:41:37.018935] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:37.772 [2024-11-05 11:41:37.018942] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:37.772 [2024-11-05 11:41:37.018954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.772 [2024-11-05 11:41:37.018962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:37.772 [2024-11-05 11:41:37.018981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.298 ms 00:25:37.772 [2024-11-05 11:41:37.018988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.772 [2024-11-05 11:41:37.019073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.772 [2024-11-05 11:41:37.019083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:37.772 [2024-11-05 11:41:37.019091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:37.772 [2024-11-05 11:41:37.019098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:37.772 [2024-11-05 11:41:37.019202] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:37.772 [2024-11-05 11:41:37.019215] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:37.772 [2024-11-05 11:41:37.019223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:37.772 [2024-11-05 11:41:37.019247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019253] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:37.772 [2024-11-05 11:41:37.019267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.772 [2024-11-05 11:41:37.019281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:37.772 [2024-11-05 11:41:37.019289] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:37.772 [2024-11-05 11:41:37.019298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:37.772 [2024-11-05 11:41:37.019305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:37.772 [2024-11-05 11:41:37.019313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:37.772 [2024-11-05 11:41:37.019326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019333] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:37.772 [2024-11-05 11:41:37.019339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019346] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:37.772 [2024-11-05 11:41:37.019359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019365] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:37.772 [2024-11-05 11:41:37.019378] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:37.772 [2024-11-05 11:41:37.019399] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:37.772 [2024-11-05 11:41:37.019418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019431] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:37.772 [2024-11-05 11:41:37.019438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.772 [2024-11-05 11:41:37.019452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:37.772 [2024-11-05 11:41:37.019458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:37.772 [2024-11-05 11:41:37.019464] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:37.772 [2024-11-05 11:41:37.019470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:37.772 [2024-11-05 11:41:37.019477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:37.772 [2024-11-05 11:41:37.019483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:37.772 [2024-11-05 11:41:37.019497] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:37.772 [2024-11-05 11:41:37.019503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019510] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:37.772 [2024-11-05 11:41:37.019519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:37.772 [2024-11-05 11:41:37.019527] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019534] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:37.772 [2024-11-05 11:41:37.019543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:37.772 [2024-11-05 11:41:37.019550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:37.772 [2024-11-05 11:41:37.019558] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:37.772 
[2024-11-05 11:41:37.019565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:37.772 [2024-11-05 11:41:37.019571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:37.772 [2024-11-05 11:41:37.019578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:37.772 [2024-11-05 11:41:37.019586] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:37.772 [2024-11-05 11:41:37.019596] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:37.772 [2024-11-05 11:41:37.019614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:37.772 [2024-11-05 11:41:37.019621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:37.772 [2024-11-05 11:41:37.019628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:37.772 [2024-11-05 11:41:37.019636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:37.772 [2024-11-05 11:41:37.019643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:37.772 [2024-11-05 11:41:37.019650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:37.772 [2024-11-05 11:41:37.019657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:37.772 [2024-11-05 11:41:37.019666] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:37.772 [2024-11-05 11:41:37.019672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019680] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019693] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:37.772 [2024-11-05 11:41:37.019709] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:37.772 [2024-11-05 11:41:37.019717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:37.772 [2024-11-05 11:41:37.019735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:37.772 [2024-11-05 11:41:37.019742] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:37.772 [2024-11-05 11:41:37.019749] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:37.773 [2024-11-05 11:41:37.019757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:37.773 [2024-11-05 11:41:37.019770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:37.773 [2024-11-05 11:41:37.019779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.623 ms 00:25:37.773 [2024-11-05 11:41:37.019786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.051075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.051122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:38.037 [2024-11-05 11:41:37.051135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.229 ms 00:25:38.037 [2024-11-05 11:41:37.051145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.051235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.051248] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:38.037 [2024-11-05 11:41:37.051257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:25:38.037 [2024-11-05 11:41:37.051265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.096727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.096778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:38.037 [2024-11-05 11:41:37.096792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.404 ms 00:25:38.037 [2024-11-05 11:41:37.096821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.096870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.096880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:38.037 [2024-11-05 11:41:37.096889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:38.037 [2024-11-05 11:41:37.096900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.097488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.097510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:38.037 [2024-11-05 11:41:37.097522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:25:38.037 [2024-11-05 11:41:37.097530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.097687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.097697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:38.037 [2024-11-05 11:41:37.097706] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:25:38.037 [2024-11-05 11:41:37.097714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.113119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.113164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:38.037 [2024-11-05 11:41:37.113175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.381 ms 00:25:38.037 [2024-11-05 11:41:37.113186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.127250] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:38.037 [2024-11-05 11:41:37.127312] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:38.037 [2024-11-05 11:41:37.127326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.127335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:38.037 [2024-11-05 11:41:37.127344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.033 ms 00:25:38.037 [2024-11-05 11:41:37.127352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.152688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.152741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:38.037 [2024-11-05 11:41:37.152752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.285 ms 00:25:38.037 [2024-11-05 11:41:37.152761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.165456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.165499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:38.037 [2024-11-05 11:41:37.165510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.625 ms 00:25:38.037 [2024-11-05 11:41:37.165517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.178155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.178200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:38.037 [2024-11-05 11:41:37.178212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.592 ms 00:25:38.037 [2024-11-05 11:41:37.178219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.178877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.178900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:38.037 [2024-11-05 11:41:37.178911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:25:38.037 [2024-11-05 11:41:37.178919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.242513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.242576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:38.037 [2024-11-05 11:41:37.242592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.572 ms 00:25:38.037 [2024-11-05 11:41:37.242608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.254061] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:38.037 [2024-11-05 11:41:37.257085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.257130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:38.037 [2024-11-05 11:41:37.257142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.418 ms 00:25:38.037 [2024-11-05 11:41:37.257150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.257238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.257251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:38.037 [2024-11-05 11:41:37.257261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:38.037 [2024-11-05 11:41:37.257269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.258288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.258408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:38.037 [2024-11-05 11:41:37.258461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.979 ms 00:25:38.037 [2024-11-05 11:41:37.258485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.258528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.258552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:38.037 [2024-11-05 11:41:37.258573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:38.037 [2024-11-05 11:41:37.258591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.258647] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:38.037 [2024-11-05 11:41:37.258675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.258697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:38.037 [2024-11-05 11:41:37.258755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:25:38.037 [2024-11-05 11:41:37.258778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.284199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.284365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:38.037 [2024-11-05 11:41:37.284424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.369 ms 00:25:38.037 [2024-11-05 11:41:37.284447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:38.037 [2024-11-05 11:41:37.284828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:38.037 [2024-11-05 11:41:37.284907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:38.037 [2024-11-05 11:41:37.284923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:25:38.037 [2024-11-05 11:41:37.284933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:25:38.037 [2024-11-05 11:41:37.286469] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 296.927 ms, result 0 00:25:39.421  [2024-11-05T11:41:39.638Z] Copying: 11/1024 [MB] (11 MBps) [2024-11-05T11:41:40.580Z] Copying: 25/1024 [MB] (14 MBps) [2024-11-05T11:41:41.524Z] Copying: 39/1024 [MB] (13 MBps) [2024-11-05T11:41:42.466Z] Copying: 51/1024 [MB] (11 MBps) [2024-11-05T11:41:43.850Z] Copying: 67/1024 [MB] (15 MBps) [2024-11-05T11:41:44.791Z] Copying: 87/1024 [MB] (20 MBps) [2024-11-05T11:41:45.728Z] Copying: 109/1024 [MB] (22 MBps) [2024-11-05T11:41:46.671Z] Copying: 125/1024 [MB] (16 MBps) [2024-11-05T11:41:47.612Z] Copying: 149/1024 [MB] (23 MBps) [2024-11-05T11:41:48.557Z] Copying: 169/1024 [MB] (19 MBps) [2024-11-05T11:41:49.496Z] Copying: 188/1024 [MB] (19 MBps) [2024-11-05T11:41:50.874Z] Copying: 213/1024 [MB] (24 MBps) [2024-11-05T11:41:51.809Z] Copying: 233/1024 [MB] (19 MBps) [2024-11-05T11:41:52.754Z] Copying: 252/1024 [MB] (19 MBps) [2024-11-05T11:41:53.732Z] Copying: 276/1024 [MB] (23 MBps) [2024-11-05T11:41:54.672Z] Copying: 293/1024 [MB] (16 MBps) [2024-11-05T11:41:55.613Z] Copying: 305/1024 [MB] (12 MBps) [2024-11-05T11:41:56.554Z] Copying: 316/1024 [MB] (10 MBps) [2024-11-05T11:41:57.495Z] Copying: 326/1024 [MB] (10 MBps) [2024-11-05T11:41:58.880Z] Copying: 337/1024 [MB] (10 MBps) [2024-11-05T11:41:59.824Z] Copying: 350/1024 [MB] (13 MBps) [2024-11-05T11:42:00.768Z] Copying: 370/1024 [MB] (19 MBps) [2024-11-05T11:42:01.712Z] Copying: 382/1024 [MB] (11 MBps) [2024-11-05T11:42:02.655Z] Copying: 392/1024 [MB] (10 MBps) [2024-11-05T11:42:03.597Z] Copying: 402/1024 [MB] (10 MBps) [2024-11-05T11:42:04.539Z] Copying: 412/1024 [MB] (10 MBps) [2024-11-05T11:42:05.479Z] Copying: 424/1024 [MB] (11 MBps) [2024-11-05T11:42:06.865Z] Copying: 435/1024 [MB] (10 MBps) [2024-11-05T11:42:07.809Z] Copying: 445/1024 [MB] (10 MBps) [2024-11-05T11:42:08.776Z] Copying: 456/1024 [MB] (10 MBps) [2024-11-05T11:42:09.720Z] Copying: 467/1024 [MB] (10 MBps) [2024-11-05T11:42:10.663Z] Copying: 477/1024 [MB] (10 MBps) [2024-11-05T11:42:11.606Z] Copying: 488/1024 [MB] (10 MBps) [2024-11-05T11:42:12.542Z] Copying: 498/1024 [MB] (10 MBps) [2024-11-05T11:42:13.487Z] Copying: 510/1024 [MB] (11 MBps) [2024-11-05T11:42:14.875Z] Copying: 521/1024 [MB] (10 MBps) [2024-11-05T11:42:15.498Z] Copying: 532/1024 [MB] (10 MBps) [2024-11-05T11:42:16.887Z] Copying: 542/1024 [MB] (10 MBps) [2024-11-05T11:42:17.461Z] Copying: 552/1024 [MB] (10 MBps) [2024-11-05T11:42:18.845Z] Copying: 563/1024 [MB] (10 MBps) [2024-11-05T11:42:19.789Z] Copying: 573/1024 [MB] (10 MBps) [2024-11-05T11:42:20.731Z] Copying: 584/1024 [MB] (10 MBps) [2024-11-05T11:42:21.672Z] Copying: 594/1024 [MB] (10 MBps) [2024-11-05T11:42:22.634Z] Copying: 604/1024 [MB] (10 MBps) [2024-11-05T11:42:23.576Z] Copying: 615/1024 [MB] (10 MBps) [2024-11-05T11:42:24.520Z] Copying: 625/1024 [MB] (10 MBps) [2024-11-05T11:42:25.461Z] Copying: 636/1024 [MB] (10 MBps) [2024-11-05T11:42:26.845Z] Copying: 647/1024 [MB] (10 MBps) [2024-11-05T11:42:27.787Z] Copying: 658/1024 [MB] (11 MBps) [2024-11-05T11:42:28.728Z] Copying: 668/1024 [MB] (10 MBps) [2024-11-05T11:42:29.671Z] Copying: 679/1024 [MB] (10 MBps) [2024-11-05T11:42:30.610Z] Copying: 689/1024 [MB] (10 MBps) [2024-11-05T11:42:31.551Z] Copying: 702/1024 [MB] (12 MBps) [2024-11-05T11:42:32.494Z] Copying: 712/1024 [MB] (10 MBps) [2024-11-05T11:42:33.879Z] Copying: 722/1024 [MB] (10 MBps) [2024-11-05T11:42:34.823Z] Copying: 733/1024 [MB] (10 MBps) 
[2024-11-05T11:42:35.765Z] Copying: 745/1024 [MB] (12 MBps) [2024-11-05T11:42:36.717Z] Copying: 756/1024 [MB] (10 MBps) [2024-11-05T11:42:37.670Z] Copying: 766/1024 [MB] (10 MBps) [2024-11-05T11:42:38.613Z] Copying: 776/1024 [MB] (10 MBps) [2024-11-05T11:42:39.554Z] Copying: 788/1024 [MB] (11 MBps) [2024-11-05T11:42:40.494Z] Copying: 798/1024 [MB] (10 MBps) [2024-11-05T11:42:41.882Z] Copying: 808/1024 [MB] (10 MBps) [2024-11-05T11:42:42.827Z] Copying: 838364/1048576 [kB] (10116 kBps) [2024-11-05T11:42:43.772Z] Copying: 829/1024 [MB] (10 MBps) [2024-11-05T11:42:44.724Z] Copying: 839/1024 [MB] (10 MBps) [2024-11-05T11:42:45.667Z] Copying: 850/1024 [MB] (10 MBps) [2024-11-05T11:42:46.611Z] Copying: 860/1024 [MB] (10 MBps) [2024-11-05T11:42:47.553Z] Copying: 871/1024 [MB] (10 MBps) [2024-11-05T11:42:48.497Z] Copying: 884/1024 [MB] (13 MBps) [2024-11-05T11:42:49.881Z] Copying: 898/1024 [MB] (13 MBps) [2024-11-05T11:42:50.827Z] Copying: 920/1024 [MB] (21 MBps) [2024-11-05T11:42:51.776Z] Copying: 931/1024 [MB] (10 MBps) [2024-11-05T11:42:52.718Z] Copying: 941/1024 [MB] (10 MBps) [2024-11-05T11:42:53.661Z] Copying: 952/1024 [MB] (10 MBps) [2024-11-05T11:42:54.605Z] Copying: 974/1024 [MB] (21 MBps) [2024-11-05T11:42:55.550Z] Copying: 996/1024 [MB] (21 MBps) [2024-11-05T11:42:56.495Z] Copying: 1012/1024 [MB] (15 MBps) [2024-11-05T11:42:56.757Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-11-05 11:42:56.530233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.530334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:57.484 [2024-11-05 11:42:56.530354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:57.484 [2024-11-05 11:42:56.530366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.530396] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:57.484 [2024-11-05 11:42:56.534599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.534648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:57.484 [2024-11-05 11:42:56.534663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.182 ms 00:26:57.484 [2024-11-05 11:42:56.534683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.534986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.535001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:57.484 [2024-11-05 11:42:56.535013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:57.484 [2024-11-05 11:42:56.535025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.539561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.539594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:57.484 [2024-11-05 11:42:56.539608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.518 ms 00:26:57.484 [2024-11-05 11:42:56.539617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.545833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.545876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 
00:26:57.484 [2024-11-05 11:42:56.545888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.188 ms 00:26:57.484 [2024-11-05 11:42:56.545896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.572607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.572659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:57.484 [2024-11-05 11:42:56.572673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.639 ms 00:26:57.484 [2024-11-05 11:42:56.572680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.588556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.588763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:57.484 [2024-11-05 11:42:56.588787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.825 ms 00:26:57.484 [2024-11-05 11:42:56.588796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.593465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.593602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:57.484 [2024-11-05 11:42:56.593695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:26:57.484 [2024-11-05 11:42:56.593722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.619560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.619741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:57.484 [2024-11-05 11:42:56.620201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.800 ms 00:26:57.484 [2024-11-05 11:42:56.620254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.645512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.645712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:57.484 [2024-11-05 11:42:56.645867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.094 ms 00:26:57.484 [2024-11-05 11:42:56.645900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.670842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.671016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:57.484 [2024-11-05 11:42:56.671100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.886 ms 00:26:57.484 [2024-11-05 11:42:56.671123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.695754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.484 [2024-11-05 11:42:56.695938] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:57.484 [2024-11-05 11:42:56.696001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.537 ms 00:26:57.484 [2024-11-05 11:42:56.696022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.484 [2024-11-05 11:42:56.696348] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:57.484 [2024-11-05 
11:42:56.696445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:57.484 [2024-11-05 11:42:56.696490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:57.484 [2024-11-05 11:42:56.696520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.696979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 
00:26:57.484 [2024-11-05 11:42:56.697412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.697903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.698082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.698119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.698149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:57.484 [2024-11-05 11:42:56.698177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 
wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698892] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:57.485 [2024-11-05 11:42:56.698910] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:57.485 [2024-11-05 11:42:56.698919] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: bf75b195-0fb7-4624-951a-ddedc5463da0 00:26:57.485 [2024-11-05 11:42:56.698931] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:57.485 [2024-11-05 11:42:56.698938] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:57.485 [2024-11-05 11:42:56.698946] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:57.485 [2024-11-05 11:42:56.698955] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:57.485 [2024-11-05 11:42:56.698963] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:57.485 [2024-11-05 11:42:56.698971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:57.485 [2024-11-05 11:42:56.698986] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:57.485 [2024-11-05 11:42:56.698993] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:57.485 [2024-11-05 11:42:56.699000] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:57.485 [2024-11-05 11:42:56.699011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.485 [2024-11-05 11:42:56.699020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:57.485 [2024-11-05 11:42:56.699030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.668 ms 00:26:57.485 [2024-11-05 11:42:56.699037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.712732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.485 [2024-11-05 11:42:56.712779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:57.485 [2024-11-05 11:42:56.712792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.625 ms 00:26:57.485 [2024-11-05 11:42:56.712827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.713225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:57.485 [2024-11-05 11:42:56.713235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:57.485 [2024-11-05 11:42:56.713245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.372 ms 00:26:57.485 [2024-11-05 11:42:56.713260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.749810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.485 [2024-11-05 11:42:56.749859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:57.485 [2024-11-05 11:42:56.749871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.485 [2024-11-05 11:42:56.749879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.749940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.485 [2024-11-05 11:42:56.749949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:57.485 [2024-11-05 11:42:56.749957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.485 [2024-11-05 11:42:56.749973] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.750067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.485 [2024-11-05 11:42:56.750079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:57.485 [2024-11-05 11:42:56.750089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.485 [2024-11-05 11:42:56.750097] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.485 [2024-11-05 11:42:56.750112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.485 [2024-11-05 11:42:56.750121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:57.485 [2024-11-05 11:42:56.750129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.485 [2024-11-05 11:42:56.750137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.835721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.835966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:57.748 [2024-11-05 11:42:56.835991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.836000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:57.748 [2024-11-05 11:42:56.906113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:57.748 [2024-11-05 11:42:56.906216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:57.748 [2024-11-05 11:42:56.906308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:57.748 [2024-11-05 11:42:56.906435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:57.748 [2024-11-05 11:42:56.906496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:57.748 [2024-11-05 11:42:56.906569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:57.748 [2024-11-05 11:42:56.906640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:57.748 [2024-11-05 11:42:56.906650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:57.748 [2024-11-05 11:42:56.906659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:57.748 [2024-11-05 11:42:56.906789] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 376.529 ms, result 0 00:26:58.317 00:26:58.317 00:26:58.578 11:42:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.141 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:01.141 11:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:01.141 11:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:01.141 11:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:01.141 11:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:01.141 11:42:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:01.141 Process with pid 77146 is not found 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 77146 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 77146 ']' 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 77146 00:27:01.141 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (77146) - No such process 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 77146 is not found' 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:01.141 Remove shared memory files 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 
00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:01.141 ************************************ 00:27:01.141 END TEST ftl_dirty_shutdown 00:27:01.141 ************************************ 00:27:01.141 00:27:01.141 real 4m3.899s 00:27:01.141 user 4m18.108s 00:27:01.141 sys 0m22.763s 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:01.141 11:43:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.141 11:43:00 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:01.141 11:43:00 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:01.141 11:43:00 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:01.141 11:43:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:01.403 ************************************ 00:27:01.403 START TEST ftl_upgrade_shutdown 00:27:01.403 ************************************ 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:01.403 * Looking for test storage... 00:27:01.403 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:01.403 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:01.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.404 --rc genhtml_branch_coverage=1 00:27:01.404 --rc genhtml_function_coverage=1 00:27:01.404 --rc genhtml_legend=1 00:27:01.404 --rc geninfo_all_blocks=1 00:27:01.404 --rc geninfo_unexecuted_blocks=1 00:27:01.404 00:27:01.404 ' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:01.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.404 --rc genhtml_branch_coverage=1 00:27:01.404 --rc genhtml_function_coverage=1 00:27:01.404 --rc genhtml_legend=1 00:27:01.404 --rc geninfo_all_blocks=1 00:27:01.404 --rc geninfo_unexecuted_blocks=1 00:27:01.404 00:27:01.404 ' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:01.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.404 --rc genhtml_branch_coverage=1 00:27:01.404 --rc genhtml_function_coverage=1 00:27:01.404 --rc genhtml_legend=1 00:27:01.404 --rc geninfo_all_blocks=1 00:27:01.404 --rc geninfo_unexecuted_blocks=1 00:27:01.404 00:27:01.404 ' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:01.404 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.404 --rc genhtml_branch_coverage=1 00:27:01.404 --rc genhtml_function_coverage=1 00:27:01.404 --rc genhtml_legend=1 00:27:01.404 --rc geninfo_all_blocks=1 00:27:01.404 --rc geninfo_unexecuted_blocks=1 00:27:01.404 00:27:01.404 ' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:01.404 11:43:00 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:01.404 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=79773 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 79773 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79773 ']' 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:01.405 11:43:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:01.666 [2024-11-05 11:43:00.690493] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:27:01.666 [2024-11-05 11:43:00.690904] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79773 ] 00:27:01.666 [2024-11-05 11:43:00.856994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.926 [2024-11-05 11:43:00.985868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:02.500 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:02.761 11:43:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:03.021 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:03.021 { 00:27:03.021 "name": "basen1", 00:27:03.021 "aliases": [ 00:27:03.021 "6d10958d-79fa-45e3-aaf9-729f1403ec92" 00:27:03.021 ], 00:27:03.021 "product_name": "NVMe disk", 00:27:03.021 "block_size": 4096, 00:27:03.021 "num_blocks": 1310720, 00:27:03.021 "uuid": "6d10958d-79fa-45e3-aaf9-729f1403ec92", 00:27:03.021 "numa_id": -1, 00:27:03.021 "assigned_rate_limits": { 00:27:03.021 "rw_ios_per_sec": 0, 00:27:03.021 "rw_mbytes_per_sec": 0, 00:27:03.021 "r_mbytes_per_sec": 0, 00:27:03.021 "w_mbytes_per_sec": 0 00:27:03.021 }, 00:27:03.021 "claimed": true, 00:27:03.021 "claim_type": "read_many_write_one", 00:27:03.021 "zoned": false, 00:27:03.021 "supported_io_types": { 00:27:03.021 "read": true, 00:27:03.021 "write": true, 00:27:03.021 "unmap": true, 00:27:03.021 "flush": true, 00:27:03.021 "reset": true, 00:27:03.021 "nvme_admin": true, 00:27:03.021 "nvme_io": true, 00:27:03.021 "nvme_io_md": false, 00:27:03.021 "write_zeroes": true, 00:27:03.021 "zcopy": false, 00:27:03.021 "get_zone_info": false, 00:27:03.021 "zone_management": false, 00:27:03.021 "zone_append": false, 00:27:03.021 "compare": true, 00:27:03.021 "compare_and_write": false, 00:27:03.021 "abort": true, 00:27:03.021 "seek_hole": false, 00:27:03.021 "seek_data": false, 00:27:03.021 "copy": true, 00:27:03.021 "nvme_iov_md": false 00:27:03.021 }, 00:27:03.021 "driver_specific": { 00:27:03.021 "nvme": [ 00:27:03.021 { 00:27:03.021 "pci_address": "0000:00:11.0", 00:27:03.021 "trid": { 00:27:03.021 "trtype": "PCIe", 00:27:03.021 "traddr": "0000:00:11.0" 00:27:03.021 }, 00:27:03.021 "ctrlr_data": { 00:27:03.021 "cntlid": 0, 00:27:03.021 "vendor_id": "0x1b36", 00:27:03.021 "model_number": "QEMU NVMe Ctrl", 00:27:03.021 "serial_number": "12341", 00:27:03.021 "firmware_revision": "8.0.0", 00:27:03.021 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:03.021 "oacs": { 00:27:03.021 "security": 0, 00:27:03.021 "format": 1, 00:27:03.021 "firmware": 0, 00:27:03.021 "ns_manage": 1 00:27:03.021 }, 00:27:03.022 "multi_ctrlr": false, 00:27:03.022 "ana_reporting": false 00:27:03.022 }, 00:27:03.022 "vs": { 00:27:03.022 "nvme_version": "1.4" 00:27:03.022 }, 00:27:03.022 "ns_data": { 00:27:03.022 "id": 1, 00:27:03.022 "can_share": false 00:27:03.022 } 00:27:03.022 } 00:27:03.022 ], 00:27:03.022 "mp_policy": "active_passive" 00:27:03.022 } 00:27:03.022 } 00:27:03.022 ]' 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:03.022 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:03.283 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=05e9054d-1a83-4474-b955-7fd811825a79 00:27:03.283 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:03.283 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 05e9054d-1a83-4474-b955-7fd811825a79 00:27:03.543 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:03.804 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1ea019db-28f4-4b0c-9c62-53a1649f25ab 00:27:03.804 11:43:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1ea019db-28f4-4b0c-9c62-53a1649f25ab 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 3c31069b-7b7e-493d-9fe8-feac4c3bc4ef ]] 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 5120 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:04.065 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 3c31069b-7b7e-493d-9fe8-feac4c3bc4ef 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:04.326 { 00:27:04.326 "name": "3c31069b-7b7e-493d-9fe8-feac4c3bc4ef", 00:27:04.326 "aliases": [ 00:27:04.326 "lvs/basen1p0" 00:27:04.326 ], 00:27:04.326 "product_name": "Logical Volume", 00:27:04.326 "block_size": 4096, 00:27:04.326 "num_blocks": 5242880, 00:27:04.326 "uuid": "3c31069b-7b7e-493d-9fe8-feac4c3bc4ef", 00:27:04.326 "assigned_rate_limits": { 00:27:04.326 "rw_ios_per_sec": 0, 00:27:04.326 "rw_mbytes_per_sec": 0, 00:27:04.326 "r_mbytes_per_sec": 0, 00:27:04.326 "w_mbytes_per_sec": 0 00:27:04.326 }, 00:27:04.326 "claimed": false, 00:27:04.326 "zoned": false, 00:27:04.326 "supported_io_types": { 00:27:04.326 "read": true, 00:27:04.326 "write": true, 00:27:04.326 "unmap": true, 00:27:04.326 "flush": false, 00:27:04.326 "reset": true, 00:27:04.326 "nvme_admin": false, 00:27:04.326 "nvme_io": false, 00:27:04.326 "nvme_io_md": false, 00:27:04.326 "write_zeroes": 
true, 00:27:04.326 "zcopy": false, 00:27:04.326 "get_zone_info": false, 00:27:04.326 "zone_management": false, 00:27:04.326 "zone_append": false, 00:27:04.326 "compare": false, 00:27:04.326 "compare_and_write": false, 00:27:04.326 "abort": false, 00:27:04.326 "seek_hole": true, 00:27:04.326 "seek_data": true, 00:27:04.326 "copy": false, 00:27:04.326 "nvme_iov_md": false 00:27:04.326 }, 00:27:04.326 "driver_specific": { 00:27:04.326 "lvol": { 00:27:04.326 "lvol_store_uuid": "1ea019db-28f4-4b0c-9c62-53a1649f25ab", 00:27:04.326 "base_bdev": "basen1", 00:27:04.326 "thin_provision": true, 00:27:04.326 "num_allocated_clusters": 0, 00:27:04.326 "snapshot": false, 00:27:04.326 "clone": false, 00:27:04.326 "esnap_clone": false 00:27:04.326 } 00:27:04.326 } 00:27:04.326 } 00:27:04.326 ]' 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:04.326 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:04.594 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:04.594 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:04.594 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:04.856 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:04.856 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:04.856 11:43:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 3c31069b-7b7e-493d-9fe8-feac4c3bc4ef -c cachen1p0 --l2p_dram_limit 2 00:27:04.856 [2024-11-05 11:43:04.039961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.040096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:04.856 [2024-11-05 11:43:04.040116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:04.856 [2024-11-05 11:43:04.040124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.040179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.040188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:04.856 [2024-11-05 11:43:04.040196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:27:04.856 [2024-11-05 11:43:04.040202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.040220] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:04.856 [2024-11-05 
11:43:04.040852] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:04.856 [2024-11-05 11:43:04.040876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.040883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:04.856 [2024-11-05 11:43:04.040892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.658 ms 00:27:04.856 [2024-11-05 11:43:04.040897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.041130] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 18392b83-52a5-494d-b471-a234a8c3b327 00:27:04.856 [2024-11-05 11:43:04.042103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.042134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:04.856 [2024-11-05 11:43:04.042141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:27:04.856 [2024-11-05 11:43:04.042149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.046936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.047058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:04.856 [2024-11-05 11:43:04.047071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.752 ms 00:27:04.856 [2024-11-05 11:43:04.047080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.047112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.047120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:04.856 [2024-11-05 11:43:04.047126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:04.856 [2024-11-05 11:43:04.047135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.047189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.047200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:04.856 [2024-11-05 11:43:04.047207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:04.856 [2024-11-05 11:43:04.047214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.047234] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:04.856 [2024-11-05 11:43:04.050116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.050217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:04.856 [2024-11-05 11:43:04.050231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.888 ms 00:27:04.856 [2024-11-05 11:43:04.050241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.050264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.050271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:04.856 [2024-11-05 11:43:04.050278] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:04.856 [2024-11-05 11:43:04.050284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.050298] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:04.856 [2024-11-05 11:43:04.050404] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:04.856 [2024-11-05 11:43:04.050416] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:04.856 [2024-11-05 11:43:04.050425] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:04.856 [2024-11-05 11:43:04.050434] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:04.856 [2024-11-05 11:43:04.050441] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:04.856 [2024-11-05 11:43:04.050449] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:04.856 [2024-11-05 11:43:04.050454] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:04.856 [2024-11-05 11:43:04.050462] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:04.856 [2024-11-05 11:43:04.050467] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:04.856 [2024-11-05 11:43:04.050476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.050481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:04.856 [2024-11-05 11:43:04.050489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:27:04.856 [2024-11-05 11:43:04.050495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.856 [2024-11-05 11:43:04.050559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.856 [2024-11-05 11:43:04.050565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:04.856 [2024-11-05 11:43:04.050573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:04.856 [2024-11-05 11:43:04.050583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.857 [2024-11-05 11:43:04.050657] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:04.857 [2024-11-05 11:43:04.050666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:04.857 [2024-11-05 11:43:04.050673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:04.857 [2024-11-05 11:43:04.050691] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050697] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:04.857 [2024-11-05 11:43:04.050703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:04.857 [2024-11-05 11:43:04.050709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:04.857 [2024-11-05 11:43:04.050714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:04.857 [2024-11-05 11:43:04.050725] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:04.857 [2024-11-05 11:43:04.050732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:04.857 [2024-11-05 11:43:04.050743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:04.857 [2024-11-05 11:43:04.050748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:04.857 [2024-11-05 11:43:04.050761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:04.857 [2024-11-05 11:43:04.050767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:04.857 [2024-11-05 11:43:04.050781] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:04.857 [2024-11-05 11:43:04.050786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050792] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:04.857 [2024-11-05 11:43:04.050798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:04.857 [2024-11-05 11:43:04.050823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:04.857 [2024-11-05 11:43:04.050841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:04.857 [2024-11-05 11:43:04.050849] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:04.857 [2024-11-05 11:43:04.050866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:04.857 [2024-11-05 11:43:04.050875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:04.857 [2024-11-05 11:43:04.050895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:04.857 [2024-11-05 11:43:04.050901] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:04.857 [2024-11-05 11:43:04.050912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:04.857 [2024-11-05 11:43:04.050931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:04.857 [2024-11-05 11:43:04.050947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:04.857 [2024-11-05 11:43:04.050953] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050958] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:04.857 [2024-11-05 11:43:04.050965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:04.857 [2024-11-05 11:43:04.050971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:04.857 [2024-11-05 11:43:04.050978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:04.857 [2024-11-05 11:43:04.050984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:04.857 [2024-11-05 11:43:04.050992] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:04.857 [2024-11-05 11:43:04.050997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:04.857 [2024-11-05 11:43:04.051004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:04.857 [2024-11-05 11:43:04.051009] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:04.857 [2024-11-05 11:43:04.051017] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:04.857 [2024-11-05 11:43:04.051025] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:04.857 [2024-11-05 11:43:04.051033] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:04.857 [2024-11-05 11:43:04.051047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051073] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:04.857 [2024-11-05 11:43:04.051078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:04.857 [2024-11-05 11:43:04.051085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:04.857 [2024-11-05 11:43:04.051090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:04.857 [2024-11-05 11:43:04.051097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051116] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:04.857 [2024-11-05 11:43:04.051140] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:04.857 [2024-11-05 11:43:04.051148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051156] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:04.857 [2024-11-05 11:43:04.051163] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:04.857 [2024-11-05 11:43:04.051168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:04.857 [2024-11-05 11:43:04.051175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:04.857 [2024-11-05 11:43:04.051181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:04.857 [2024-11-05 11:43:04.051188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:04.857 [2024-11-05 11:43:04.051194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.577 ms 00:27:04.857 [2024-11-05 11:43:04.051201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:04.857 [2024-11-05 11:43:04.051244] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
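The device stack that bdev_ftl_create consumes here is assembled entirely through rpc.py, as traced above (the helper also deletes any pre-existing lvstore first). Condensed into a sketch, with the run-specific UUIDs replaced by shell variables captured from the RPC output:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Base side: attach 0000:00:11.0 (exposes basen1), then carve a thin-provisioned
# 20480 MiB lvol out of a fresh lvstore; both RPCs print the UUID they create
$rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
lvs_uuid=$($rpc bdev_lvol_create_lvstore basen1 lvs)
lvol_uuid=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs_uuid")

# Cache side: attach 0000:00:10.0 (exposes cachen1) and split off a 5120 MiB partition
$rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0
$rpc bdev_split_create cachen1 -s 5120 1          # first split becomes cachen1p0

# Stitch both into the FTL bdev with a 2 MiB L2P DRAM limit; startup output follows below
$rpc -t 60 bdev_ftl_create -b ftl -d "$lvol_uuid" -c cachen1p0 --l2p_dram_limit 2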
00:27:04.857 [2024-11-05 11:43:04.051258] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:09.065 [2024-11-05 11:43:08.082061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.082161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:09.065 [2024-11-05 11:43:08.082180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4030.799 ms 00:27:09.065 [2024-11-05 11:43:08.082192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.115045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.115124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:09.065 [2024-11-05 11:43:08.115139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.597 ms 00:27:09.065 [2024-11-05 11:43:08.115151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.115247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.115262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:09.065 [2024-11-05 11:43:08.115271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:09.065 [2024-11-05 11:43:08.115284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.151018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.151085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:09.065 [2024-11-05 11:43:08.151099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.698 ms 00:27:09.065 [2024-11-05 11:43:08.151110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.151149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.151160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:09.065 [2024-11-05 11:43:08.151169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:09.065 [2024-11-05 11:43:08.151182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.151842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.151874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:09.065 [2024-11-05 11:43:08.151886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.585 ms 00:27:09.065 [2024-11-05 11:43:08.151897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.151954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.151967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:09.065 [2024-11-05 11:43:08.151976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:09.065 [2024-11-05 11:43:08.151989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.169624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.169683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:09.065 [2024-11-05 11:43:08.169696] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.611 ms 00:27:09.065 [2024-11-05 11:43:08.169709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.183038] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:09.065 [2024-11-05 11:43:08.184374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.184422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:09.065 [2024-11-05 11:43:08.184436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.568 ms 00:27:09.065 [2024-11-05 11:43:08.184444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.223198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.223262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:09.065 [2024-11-05 11:43:08.223282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.713 ms 00:27:09.065 [2024-11-05 11:43:08.223291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.223406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.223418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:09.065 [2024-11-05 11:43:08.223434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:27:09.065 [2024-11-05 11:43:08.223446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.249112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.249170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:09.065 [2024-11-05 11:43:08.249186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.601 ms 00:27:09.065 [2024-11-05 11:43:08.249197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.275156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.275380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:09.065 [2024-11-05 11:43:08.275409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.897 ms 00:27:09.065 [2024-11-05 11:43:08.275417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.065 [2024-11-05 11:43:08.276067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.065 [2024-11-05 11:43:08.276090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:09.065 [2024-11-05 11:43:08.276104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.601 ms 00:27:09.065 [2024-11-05 11:43:08.276112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.367156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.325 [2024-11-05 11:43:08.367215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:09.325 [2024-11-05 11:43:08.367236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 90.977 ms 00:27:09.325 [2024-11-05 11:43:08.367246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.395262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:09.325 [2024-11-05 11:43:08.395317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:09.325 [2024-11-05 11:43:08.395346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.924 ms 00:27:09.325 [2024-11-05 11:43:08.395354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.422010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.325 [2024-11-05 11:43:08.422220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:09.325 [2024-11-05 11:43:08.422248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.597 ms 00:27:09.325 [2024-11-05 11:43:08.422257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.456070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.325 [2024-11-05 11:43:08.456129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:09.325 [2024-11-05 11:43:08.456147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.423 ms 00:27:09.325 [2024-11-05 11:43:08.456155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.456215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.325 [2024-11-05 11:43:08.456226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:09.325 [2024-11-05 11:43:08.456248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:27:09.325 [2024-11-05 11:43:08.456256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.456368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:09.325 [2024-11-05 11:43:08.456379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:09.325 [2024-11-05 11:43:08.456390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.040 ms 00:27:09.325 [2024-11-05 11:43:08.456398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:09.325 [2024-11-05 11:43:08.457569] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4417.115 ms, result 0 00:27:09.325 { 00:27:09.325 "name": "ftl", 00:27:09.325 "uuid": "18392b83-52a5-494d-b471-a234a8c3b327" 00:27:09.325 } 00:27:09.325 11:43:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:09.586 [2024-11-05 11:43:08.680692] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:09.586 11:43:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:09.847 11:43:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:09.847 [2024-11-05 11:43:09.109166] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:10.109 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:10.109 [2024-11-05 11:43:09.314515] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:10.109 11:43:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:10.681 Fill FTL, iteration 1 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=79901 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 79901 /var/tmp/spdk.tgt.sock 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 79901 ']' 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:10.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:10.681 11:43:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:10.681 [2024-11-05 11:43:09.734920] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:27:10.681 [2024-11-05 11:43:09.735227] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79901 ] 00:27:10.681 [2024-11-05 11:43:09.894182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.942 [2024-11-05 11:43:09.988434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.515 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:11.515 11:43:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:11.515 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:11.776 ftln1 00:27:11.776 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:11.776 11:43:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 79901 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79901 ']' 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79901 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79901 00:27:11.776 killing process with pid 79901 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79901' 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79901 00:27:11.776 11:43:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79901 00:27:13.705 11:43:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:13.705 11:43:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:13.705 [2024-11-05 11:43:12.501187] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
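The tcp_dd helper traced above is a thin wrapper: a second, short-lived spdk_tgt (the "initiator") is started on core 1 with its own RPC socket, attaches the namespace exported by the target over NVMe/TCP so it appears as ftln1, dumps just the bdev subsystem configuration to ini.json, and is then killed; spdk_dd replays that JSON and performs the actual I/O. A rough sketch of the equivalent steps, omitting the waitforlisten retry loop the helper uses and with illustrative variable names:

spdk=/home/vagrant/spdk_repo/spdk
ini_rpc=/var/tmp/spdk.tgt.sock
ini_cnfg=$spdk/test/ftl/config/ini.json

# One-time initiator setup: a second spdk_tgt on core 1 with its own RPC socket
$spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=$ini_rpc &
ini_pid=$!

# Attach the exported namespace over NVMe/TCP; it shows up as ftln1
$spdk/scripts/rpc.py -s $ini_rpc bdev_nvme_attach_controller -b ftl -t tcp \
    -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0

# Persist only the bdev subsystem so spdk_dd can recreate the attachment on its own
{
    echo '{"subsystems": ['
    $spdk/scripts/rpc.py -s $ini_rpc save_subsystem_config -n bdev
    echo ']}'
} > "$ini_cnfg"
kill "$ini_pid"

# The actual fill for iteration 1: 1024 x 1 MiB of random data at queue depth 2, seek=0
$spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=$ini_rpc --json="$ini_cnfg" \
    --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0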
00:27:13.705 [2024-11-05 11:43:12.501301] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79949 ] 00:27:13.705 [2024-11-05 11:43:12.658732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:13.705 [2024-11-05 11:43:12.751067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:15.103  [2024-11-05T11:43:15.318Z] Copying: 196/1024 [MB] (196 MBps) [2024-11-05T11:43:16.260Z] Copying: 438/1024 [MB] (242 MBps) [2024-11-05T11:43:17.202Z] Copying: 695/1024 [MB] (257 MBps) [2024-11-05T11:43:17.464Z] Copying: 952/1024 [MB] (257 MBps) [2024-11-05T11:43:18.036Z] Copying: 1024/1024 [MB] (average 238 MBps) 00:27:18.762 00:27:18.762 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:18.762 Calculate MD5 checksum, iteration 1 00:27:18.762 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:18.762 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.762 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:18.762 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:18.763 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:18.763 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:18.763 11:43:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:18.763 [2024-11-05 11:43:18.016851] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:27:18.763 [2024-11-05 11:43:18.017418] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80002 ] 00:27:19.024 [2024-11-05 11:43:18.172870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.024 [2024-11-05 11:43:18.248322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:20.412  [2024-11-05T11:43:20.272Z] Copying: 673/1024 [MB] (673 MBps) [2024-11-05T11:43:20.534Z] Copying: 1024/1024 [MB] (average 694 MBps) 00:27:21.260 00:27:21.260 11:43:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:21.260 11:43:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:23.809 Fill FTL, iteration 2 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=fcd470ead6d4a21d3f921bef22a78a4d 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:23.809 11:43:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:23.809 [2024-11-05 11:43:22.742996] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
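Each pass of the loop traced here (upgrade_shutdown.sh@38-@48) writes a fresh 1 GiB of random data into the FTL bdev, reads the same range back into a scratch file, and records its MD5 (fcd470ead6d4a21d3f921bef22a78a4d for iteration 1 above), advancing seek/skip by 1024 MiB per iteration; the recorded sums are presumably compared later in the test after the shutdown/upgrade path. In loop form, roughly:

file=/home/vagrant/spdk_repo/spdk/test/ftl/file
seek=0
skip=0
sums=()
for (( i = 0; i < 2; i++ )); do
    # write 1024 MiB of fresh random data at the current offset
    tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
    (( seek += 1024 ))
    # read the same range back into a scratch file and remember its checksum
    tcp_dd --ib=ftln1 --of=$file --bs=1048576 --count=1024 --qd=2 --skip=$skip
    (( skip += 1024 ))
    sums[i]=$(md5sum "$file" | cut -f1 -d' ')
done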
00:27:23.809 [2024-11-05 11:43:22.743283] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80060 ] 00:27:23.809 [2024-11-05 11:43:22.898638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.809 [2024-11-05 11:43:22.991667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:25.193  [2024-11-05T11:43:25.406Z] Copying: 188/1024 [MB] (188 MBps) [2024-11-05T11:43:26.349Z] Copying: 401/1024 [MB] (213 MBps) [2024-11-05T11:43:27.737Z] Copying: 661/1024 [MB] (260 MBps) [2024-11-05T11:43:27.998Z] Copying: 921/1024 [MB] (260 MBps) [2024-11-05T11:43:28.569Z] Copying: 1024/1024 [MB] (average 231 MBps) 00:27:29.295 00:27:29.295 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:29.295 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:29.295 Calculate MD5 checksum, iteration 2 00:27:29.295 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.295 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:29.295 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:29.296 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:29.296 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:29.296 11:43:28 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:29.296 [2024-11-05 11:43:28.393141] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:27:29.296 [2024-11-05 11:43:28.393407] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80119 ] 00:27:29.296 [2024-11-05 11:43:28.543531] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.556 [2024-11-05 11:43:28.622436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.950  [2024-11-05T11:43:30.796Z] Copying: 677/1024 [MB] (677 MBps) [2024-11-05T11:43:31.740Z] Copying: 1024/1024 [MB] (average 687 MBps) 00:27:32.466 00:27:32.466 11:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:32.466 11:43:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=dd572ae63df9e81c8c75f08b69b024a5 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:34.383 [2024-11-05 11:43:33.629043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.383 [2024-11-05 11:43:33.629082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.383 [2024-11-05 11:43:33.629094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:27:34.383 [2024-11-05 11:43:33.629101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.383 [2024-11-05 11:43:33.629119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.383 [2024-11-05 11:43:33.629126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.383 [2024-11-05 11:43:33.629133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.383 [2024-11-05 11:43:33.629139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.383 [2024-11-05 11:43:33.629157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.383 [2024-11-05 11:43:33.629164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.383 [2024-11-05 11:43:33.629171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.383 [2024-11-05 11:43:33.629177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.383 [2024-11-05 11:43:33.629226] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.175 ms, result 0 00:27:34.383 true 00:27:34.383 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:34.643 { 00:27:34.643 "name": "ftl", 00:27:34.643 "properties": [ 00:27:34.643 { 00:27:34.643 "name": "superblock_version", 00:27:34.643 "value": 5, 00:27:34.643 "read-only": true 00:27:34.643 }, 00:27:34.643 { 00:27:34.643 "name": "base_device", 00:27:34.643 "bands": [ 00:27:34.643 { 00:27:34.643 "id": 0, 00:27:34.643 "state": "FREE", 00:27:34.643 "validity": 0.0 
00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 1, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 2, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 3, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 4, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 5, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 6, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 7, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 8, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 9, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 10, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 11, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 12, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 13, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 14, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 15, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 16, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 17, 00:27:34.644 "state": "FREE", 00:27:34.644 "validity": 0.0 00:27:34.644 } 00:27:34.644 ], 00:27:34.644 "read-only": true 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "name": "cache_device", 00:27:34.644 "type": "bdev", 00:27:34.644 "chunks": [ 00:27:34.644 { 00:27:34.644 "id": 0, 00:27:34.644 "state": "INACTIVE", 00:27:34.644 "utilization": 0.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 1, 00:27:34.644 "state": "CLOSED", 00:27:34.644 "utilization": 1.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 2, 00:27:34.644 "state": "CLOSED", 00:27:34.644 "utilization": 1.0 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 3, 00:27:34.644 "state": "OPEN", 00:27:34.644 "utilization": 0.001953125 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "id": 4, 00:27:34.644 "state": "OPEN", 00:27:34.644 "utilization": 0.0 00:27:34.644 } 00:27:34.644 ], 00:27:34.644 "read-only": true 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "name": "verbose_mode", 00:27:34.644 "value": true, 00:27:34.644 "unit": "", 00:27:34.644 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:34.644 }, 00:27:34.644 { 00:27:34.644 "name": "prep_upgrade_on_shutdown", 00:27:34.644 "value": false, 00:27:34.644 "unit": "", 00:27:34.644 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:34.644 } 00:27:34.644 ] 00:27:34.644 } 00:27:34.644 11:43:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:27:34.904 [2024-11-05 11:43:33.993327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:34.904 [2024-11-05 11:43:33.993359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:34.904 [2024-11-05 11:43:33.993368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:34.904 [2024-11-05 11:43:33.993374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.904 [2024-11-05 11:43:33.993390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.904 [2024-11-05 11:43:33.993396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:34.904 [2024-11-05 11:43:33.993402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.904 [2024-11-05 11:43:33.993408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.904 [2024-11-05 11:43:33.993422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:34.904 [2024-11-05 11:43:33.993428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:34.904 [2024-11-05 11:43:33.993434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:34.904 [2024-11-05 11:43:33.993439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:34.904 [2024-11-05 11:43:33.993481] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.145 ms, result 0 00:27:34.904 true 00:27:34.904 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:34.904 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:27:34.904 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.166 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:27:35.166 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:27:35.166 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:35.166 [2024-11-05 11:43:34.357597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.166 [2024-11-05 11:43:34.357630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:35.166 [2024-11-05 11:43:34.357638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:35.166 [2024-11-05 11:43:34.357644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.166 [2024-11-05 11:43:34.357661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.166 [2024-11-05 11:43:34.357667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:35.166 [2024-11-05 11:43:34.357672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.166 [2024-11-05 11:43:34.357678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:35.166 [2024-11-05 11:43:34.357692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:35.166 [2024-11-05 11:43:34.357698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:35.166 [2024-11-05 11:43:34.357704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:27:35.166 [2024-11-05 11:43:34.357710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:35.166 [2024-11-05 11:43:34.357750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.143 ms, result 0 00:27:35.166 true 00:27:35.166 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:35.430 { 00:27:35.430 "name": "ftl", 00:27:35.430 "properties": [ 00:27:35.430 { 00:27:35.430 "name": "superblock_version", 00:27:35.430 "value": 5, 00:27:35.430 "read-only": true 00:27:35.430 }, 00:27:35.430 { 00:27:35.430 "name": "base_device", 00:27:35.430 "bands": [ 00:27:35.430 { 00:27:35.430 "id": 0, 00:27:35.430 "state": "FREE", 00:27:35.430 "validity": 0.0 00:27:35.430 }, 00:27:35.430 { 00:27:35.430 "id": 1, 00:27:35.430 "state": "FREE", 00:27:35.430 "validity": 0.0 00:27:35.430 }, 00:27:35.430 { 00:27:35.430 "id": 2, 00:27:35.430 "state": "FREE", 00:27:35.430 "validity": 0.0 00:27:35.430 }, 00:27:35.430 { 00:27:35.430 "id": 3, 00:27:35.430 "state": "FREE", 00:27:35.430 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 4, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 5, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 6, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 7, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 8, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 9, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 10, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 11, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 12, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 13, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 14, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 15, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 16, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 17, 00:27:35.431 "state": "FREE", 00:27:35.431 "validity": 0.0 00:27:35.431 } 00:27:35.431 ], 00:27:35.431 "read-only": true 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "name": "cache_device", 00:27:35.431 "type": "bdev", 00:27:35.431 "chunks": [ 00:27:35.431 { 00:27:35.431 "id": 0, 00:27:35.431 "state": "INACTIVE", 00:27:35.431 "utilization": 0.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 1, 00:27:35.431 "state": "CLOSED", 00:27:35.431 "utilization": 1.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 2, 00:27:35.431 "state": "CLOSED", 00:27:35.431 "utilization": 1.0 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 3, 00:27:35.431 "state": "OPEN", 00:27:35.431 "utilization": 0.001953125 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "id": 4, 00:27:35.431 "state": "OPEN", 00:27:35.431 "utilization": 0.0 00:27:35.431 } 00:27:35.431 ], 00:27:35.431 "read-only": true 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "name": "verbose_mode", 
00:27:35.431 "value": true, 00:27:35.431 "unit": "", 00:27:35.431 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:35.431 }, 00:27:35.431 { 00:27:35.431 "name": "prep_upgrade_on_shutdown", 00:27:35.431 "value": true, 00:27:35.431 "unit": "", 00:27:35.431 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:35.431 } 00:27:35.431 ] 00:27:35.431 } 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 79773 ]] 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 79773 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 79773 ']' 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 79773 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 79773 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:35.431 killing process with pid 79773 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 79773' 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 79773 00:27:35.431 11:43:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 79773 00:27:36.033 [2024-11-05 11:43:35.127236] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:27:36.033 [2024-11-05 11:43:35.139118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.033 [2024-11-05 11:43:35.139151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:27:36.033 [2024-11-05 11:43:35.139161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:36.033 [2024-11-05 11:43:35.139167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:36.033 [2024-11-05 11:43:35.139183] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:27:36.033 [2024-11-05 11:43:35.141289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:36.033 [2024-11-05 11:43:35.141313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:27:36.033 [2024-11-05 11:43:35.141321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.096 ms 00:27:36.033 [2024-11-05 11:43:35.141328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.053 [2024-11-05 11:43:43.778531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.778594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:27:46.054 [2024-11-05 11:43:43.778606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8637.142 ms 00:27:46.054 [2024-11-05 11:43:43.778612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.779567] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.779595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:27:46.054 [2024-11-05 11:43:43.779603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.942 ms 00:27:46.054 [2024-11-05 11:43:43.779609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.780486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.780507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:27:46.054 [2024-11-05 11:43:43.780515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.856 ms 00:27:46.054 [2024-11-05 11:43:43.780522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.788128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.788157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:27:46.054 [2024-11-05 11:43:43.788164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.576 ms 00:27:46.054 [2024-11-05 11:43:43.788170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.793623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.793652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:27:46.054 [2024-11-05 11:43:43.793661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 5.427 ms 00:27:46.054 [2024-11-05 11:43:43.793668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.793723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.793731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:27:46.054 [2024-11-05 11:43:43.793737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:46.054 [2024-11-05 11:43:43.793743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.800647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.800675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:27:46.054 [2024-11-05 11:43:43.800683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.888 ms 00:27:46.054 [2024-11-05 11:43:43.800688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.807681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.807708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:27:46.054 [2024-11-05 11:43:43.807715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.969 ms 00:27:46.054 [2024-11-05 11:43:43.807720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.814697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.814724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:27:46.054 [2024-11-05 11:43:43.814731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.950 ms 00:27:46.054 [2024-11-05 11:43:43.814737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.821630] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.821658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:27:46.054 [2024-11-05 11:43:43.821664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.846 ms 00:27:46.054 [2024-11-05 11:43:43.821670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.821694] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:27:46.054 [2024-11-05 11:43:43.821704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.054 [2024-11-05 11:43:43.821711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:27:46.054 [2024-11-05 11:43:43.821724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:27:46.054 [2024-11-05 11:43:43.821732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:46.054 [2024-11-05 11:43:43.821829] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:27:46.054 [2024-11-05 11:43:43.821835] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 18392b83-52a5-494d-b471-a234a8c3b327 00:27:46.054 [2024-11-05 11:43:43.821841] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:27:46.054 [2024-11-05 11:43:43.821846] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:27:46.054 [2024-11-05 11:43:43.821852] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:27:46.054 [2024-11-05 11:43:43.821859] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:27:46.054 [2024-11-05 11:43:43.821864] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:27:46.054 [2024-11-05 11:43:43.821870] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:27:46.054 [2024-11-05 11:43:43.821876] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:27:46.054 [2024-11-05 11:43:43.821881] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:27:46.054 [2024-11-05 11:43:43.821885] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:27:46.054 [2024-11-05 11:43:43.821891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.821899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:27:46.054 [2024-11-05 11:43:43.821908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.198 ms 00:27:46.054 [2024-11-05 11:43:43.821913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.831398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.831426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:27:46.054 [2024-11-05 11:43:43.831434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.472 ms 00:27:46.054 [2024-11-05 11:43:43.831440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.831710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:46.054 [2024-11-05 11:43:43.831717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:27:46.054 [2024-11-05 11:43:43.831723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.256 ms 00:27:46.054 [2024-11-05 11:43:43.831728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.864621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.864650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:46.054 [2024-11-05 11:43:43.864658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.054 [2024-11-05 11:43:43.864664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.864690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.864697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:46.054 [2024-11-05 11:43:43.864703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.054 [2024-11-05 11:43:43.864709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.864760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.864768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:46.054 [2024-11-05 11:43:43.864774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.054 [2024-11-05 11:43:43.864779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.864792] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.864815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:46.054 [2024-11-05 11:43:43.864821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.054 [2024-11-05 11:43:43.864827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.924931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.924963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:46.054 [2024-11-05 11:43:43.924971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.054 [2024-11-05 11:43:43.924977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.054 [2024-11-05 11:43:43.973044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.054 [2024-11-05 11:43:43.973076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:46.054 [2024-11-05 11:43:43.973085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:46.055 [2024-11-05 11:43:43.973171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:46.055 [2024-11-05 11:43:43.973225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:46.055 [2024-11-05 11:43:43.973310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:27:46.055 [2024-11-05 11:43:43.973353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:46.055 [2024-11-05 11:43:43.973401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 
[2024-11-05 11:43:43.973440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:27:46.055 [2024-11-05 11:43:43.973447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:46.055 [2024-11-05 11:43:43.973456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:27:46.055 [2024-11-05 11:43:43.973461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:46.055 [2024-11-05 11:43:43.973551] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8834.388 ms, result 0 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80313 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80313 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80313 ']' 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:46.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:46.316 11:43:45 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:46.577 [2024-11-05 11:43:45.653828] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:27:46.577 [2024-11-05 11:43:45.653944] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80313 ] 00:27:46.577 [2024-11-05 11:43:45.808929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.838 [2024-11-05 11:43:45.886091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:47.410 [2024-11-05 11:43:46.452482] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.410 [2024-11-05 11:43:46.452534] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:27:47.410 [2024-11-05 11:43:46.595368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.410 [2024-11-05 11:43:46.595407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:47.410 [2024-11-05 11:43:46.595418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:47.410 [2024-11-05 11:43:46.595425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.410 [2024-11-05 11:43:46.595463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.410 [2024-11-05 11:43:46.595471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:47.410 [2024-11-05 11:43:46.595478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:27:47.410 [2024-11-05 11:43:46.595485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.410 [2024-11-05 11:43:46.595503] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:47.410 [2024-11-05 11:43:46.596009] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:47.410 [2024-11-05 11:43:46.596021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.596027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:47.411 [2024-11-05 11:43:46.596034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:27:47.411 [2024-11-05 11:43:46.596040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.597006] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:27:47.411 [2024-11-05 11:43:46.606681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.606712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:27:47.411 [2024-11-05 11:43:46.606721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.677 ms 00:27:47.411 [2024-11-05 11:43:46.606731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.606775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.606783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:27:47.411 [2024-11-05 11:43:46.606789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:27:47.411 [2024-11-05 11:43:46.606795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.611255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 
11:43:46.611285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:47.411 [2024-11-05 11:43:46.611292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.402 ms 00:27:47.411 [2024-11-05 11:43:46.611298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.611338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.611346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:47.411 [2024-11-05 11:43:46.611352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:27:47.411 [2024-11-05 11:43:46.611358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.611391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.611398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:47.411 [2024-11-05 11:43:46.611404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:47.411 [2024-11-05 11:43:46.611412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.611428] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:47.411 [2024-11-05 11:43:46.614006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.614032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:47.411 [2024-11-05 11:43:46.614040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.582 ms 00:27:47.411 [2024-11-05 11:43:46.614045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.614069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.614076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:47.411 [2024-11-05 11:43:46.614082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:47.411 [2024-11-05 11:43:46.614087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.614102] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:27:47.411 [2024-11-05 11:43:46.614116] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:27:47.411 [2024-11-05 11:43:46.614143] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:27:47.411 [2024-11-05 11:43:46.614155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:27:47.411 [2024-11-05 11:43:46.614234] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:47.411 [2024-11-05 11:43:46.614241] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:47.411 [2024-11-05 11:43:46.614249] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:47.411 [2024-11-05 11:43:46.614256] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614263] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614269] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:47.411 [2024-11-05 11:43:46.614276] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:47.411 [2024-11-05 11:43:46.614282] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:47.411 [2024-11-05 11:43:46.614287] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:47.411 [2024-11-05 11:43:46.614293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.614298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:47.411 [2024-11-05 11:43:46.614304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:27:47.411 [2024-11-05 11:43:46.614309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.614373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.411 [2024-11-05 11:43:46.614380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:47.411 [2024-11-05 11:43:46.614385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:27:47.411 [2024-11-05 11:43:46.614392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.411 [2024-11-05 11:43:46.614466] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:47.411 [2024-11-05 11:43:46.614473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:47.411 [2024-11-05 11:43:46.614479] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614491] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:47.411 [2024-11-05 11:43:46.614496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:47.411 [2024-11-05 11:43:46.614506] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:47.411 [2024-11-05 11:43:46.614511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:47.411 [2024-11-05 11:43:46.614516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:47.411 [2024-11-05 11:43:46.614526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:27:47.411 [2024-11-05 11:43:46.614531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:47.411 [2024-11-05 11:43:46.614542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:47.411 [2024-11-05 11:43:46.614548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:47.411 [2024-11-05 11:43:46.614559] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:47.411 [2024-11-05 11:43:46.614563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614568] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:47.411 [2024-11-05 11:43:46.614573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:47.411 [2024-11-05 11:43:46.614578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:47.411 [2024-11-05 11:43:46.614589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:47.411 [2024-11-05 11:43:46.614594] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:47.411 [2024-11-05 11:43:46.614608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:47.411 [2024-11-05 11:43:46.614613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:47.411 [2024-11-05 11:43:46.614622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:47.411 [2024-11-05 11:43:46.614627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614632] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:47.411 [2024-11-05 11:43:46.614637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:47.411 [2024-11-05 11:43:46.614642] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614647] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:47.411 [2024-11-05 11:43:46.614652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:47.411 [2024-11-05 11:43:46.614666] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614675] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:47.411 [2024-11-05 11:43:46.614680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:47.411 [2024-11-05 11:43:46.614685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.411 [2024-11-05 11:43:46.614689] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:27:47.411 [2024-11-05 11:43:46.614695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:47.411 [2024-11-05 11:43:46.614701] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:47.411 [2024-11-05 11:43:46.614706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:47.412 [2024-11-05 11:43:46.614713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:47.412 [2024-11-05 11:43:46.614718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:47.412 [2024-11-05 11:43:46.614723] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:47.412 [2024-11-05 11:43:46.614728] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:47.412 [2024-11-05 11:43:46.614733] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:47.412 [2024-11-05 11:43:46.614739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:47.412 [2024-11-05 11:43:46.614745] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:47.412 [2024-11-05 11:43:46.614753] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:47.412 [2024-11-05 11:43:46.614764] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614775] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:47.412 [2024-11-05 11:43:46.614780] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:47.412 [2024-11-05 11:43:46.614785] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:47.412 [2024-11-05 11:43:46.614790] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:47.412 [2024-11-05 11:43:46.614796] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614811] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614827] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:47.412 [2024-11-05 11:43:46.614843] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:47.412 [2024-11-05 11:43:46.614849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614855] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:47.412 [2024-11-05 11:43:46.614861] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:47.412 [2024-11-05 11:43:46.614866] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:47.412 [2024-11-05 11:43:46.614871] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:47.412 [2024-11-05 11:43:46.614877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:47.412 [2024-11-05 11:43:46.614883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:47.412 [2024-11-05 11:43:46.614889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.464 ms 00:27:47.412 [2024-11-05 11:43:46.614894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:47.412 [2024-11-05 11:43:46.614928] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:27:47.412 [2024-11-05 11:43:46.614936] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:51.617 [2024-11-05 11:43:50.513586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.513672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:51.617 [2024-11-05 11:43:50.513690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3898.641 ms 00:27:51.617 [2024-11-05 11:43:50.513699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.544789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.544870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:51.617 [2024-11-05 11:43:50.544885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.847 ms 00:27:51.617 [2024-11-05 11:43:50.544894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.544992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.545004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:51.617 [2024-11-05 11:43:50.545021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:27:51.617 [2024-11-05 11:43:50.545029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.580366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.580422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:51.617 [2024-11-05 11:43:50.580434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.297 ms 00:27:51.617 [2024-11-05 11:43:50.580443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.580485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.580494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:51.617 [2024-11-05 11:43:50.580503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:27:51.617 [2024-11-05 11:43:50.580512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.581142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.581167] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:51.617 [2024-11-05 11:43:50.581178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.567 ms 00:27:51.617 [2024-11-05 11:43:50.581187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.581240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.581249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:51.617 [2024-11-05 11:43:50.581258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:27:51.617 [2024-11-05 11:43:50.581266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.617 [2024-11-05 11:43:50.598872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.617 [2024-11-05 11:43:50.598921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:51.617 [2024-11-05 11:43:50.598932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.579 ms 00:27:51.617 [2024-11-05 11:43:50.598941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.613341] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:27:51.618 [2024-11-05 11:43:50.613395] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:27:51.618 [2024-11-05 11:43:50.613409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.613419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:27:51.618 [2024-11-05 11:43:50.613430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.327 ms 00:27:51.618 [2024-11-05 11:43:50.613437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.628666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.628716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:27:51.618 [2024-11-05 11:43:50.628729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 15.198 ms 00:27:51.618 [2024-11-05 11:43:50.628738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.641459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.641506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:27:51.618 [2024-11-05 11:43:50.641519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.664 ms 00:27:51.618 [2024-11-05 11:43:50.641528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.653932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.653993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:27:51.618 [2024-11-05 11:43:50.654005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.375 ms 00:27:51.618 [2024-11-05 11:43:50.654013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.654635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.654658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:51.618 [2024-11-05 
11:43:50.654673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:27:51.618 [2024-11-05 11:43:50.654681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.732282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.732364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:27:51.618 [2024-11-05 11:43:50.732381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 77.578 ms 00:27:51.618 [2024-11-05 11:43:50.732390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.743663] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:51.618 [2024-11-05 11:43:50.744707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.744746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:51.618 [2024-11-05 11:43:50.744759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.251 ms 00:27:51.618 [2024-11-05 11:43:50.744769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.744902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.744915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:27:51.618 [2024-11-05 11:43:50.744928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:27:51.618 [2024-11-05 11:43:50.744937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.745004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.745016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:51.618 [2024-11-05 11:43:50.745025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:27:51.618 [2024-11-05 11:43:50.745034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.745057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.745065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:51.618 [2024-11-05 11:43:50.745075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:51.618 [2024-11-05 11:43:50.745087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.745124] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:27:51.618 [2024-11-05 11:43:50.745134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.745143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:27:51.618 [2024-11-05 11:43:50.745151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:27:51.618 [2024-11-05 11:43:50.745159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.770923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.770973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:51.618 [2024-11-05 11:43:50.770993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.742 ms 00:27:51.618 [2024-11-05 11:43:50.771002] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.771116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.618 [2024-11-05 11:43:50.771127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:51.618 [2024-11-05 11:43:50.771136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:51.618 [2024-11-05 11:43:50.771144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.618 [2024-11-05 11:43:50.772539] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4176.633 ms, result 0 00:27:51.618 [2024-11-05 11:43:50.787385] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.618 [2024-11-05 11:43:50.803369] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:51.618 [2024-11-05 11:43:50.811564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:51.618 11:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:51.618 11:43:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:51.618 11:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:51.618 11:43:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:27:51.618 11:43:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:51.880 [2024-11-05 11:43:51.063650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.880 [2024-11-05 11:43:51.063706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:51.880 [2024-11-05 11:43:51.063720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:27:51.880 [2024-11-05 11:43:51.063729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.880 [2024-11-05 11:43:51.063759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.880 [2024-11-05 11:43:51.063769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:51.880 [2024-11-05 11:43:51.063778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:51.880 [2024-11-05 11:43:51.063786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.880 [2024-11-05 11:43:51.063820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:51.880 [2024-11-05 11:43:51.063829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:51.880 [2024-11-05 11:43:51.063838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:27:51.880 [2024-11-05 11:43:51.063847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:51.880 [2024-11-05 11:43:51.063915] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.252 ms, result 0 00:27:51.880 true 00:27:51.880 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:52.142 { 00:27:52.142 "name": "ftl", 00:27:52.142 "properties": [ 00:27:52.142 { 00:27:52.142 "name": "superblock_version", 00:27:52.142 "value": 5, 00:27:52.142 "read-only": true 00:27:52.142 }, 
00:27:52.142 { 00:27:52.142 "name": "base_device", 00:27:52.142 "bands": [ 00:27:52.142 { 00:27:52.142 "id": 0, 00:27:52.142 "state": "CLOSED", 00:27:52.142 "validity": 1.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 1, 00:27:52.142 "state": "CLOSED", 00:27:52.142 "validity": 1.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 2, 00:27:52.142 "state": "CLOSED", 00:27:52.142 "validity": 0.007843137254901933 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 3, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 4, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 5, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 6, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 7, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 8, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 9, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 10, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 11, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 12, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 13, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 14, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 15, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 16, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 17, 00:27:52.142 "state": "FREE", 00:27:52.142 "validity": 0.0 00:27:52.142 } 00:27:52.142 ], 00:27:52.142 "read-only": true 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": "cache_device", 00:27:52.142 "type": "bdev", 00:27:52.142 "chunks": [ 00:27:52.142 { 00:27:52.142 "id": 0, 00:27:52.142 "state": "INACTIVE", 00:27:52.142 "utilization": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 1, 00:27:52.142 "state": "OPEN", 00:27:52.142 "utilization": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 2, 00:27:52.142 "state": "OPEN", 00:27:52.142 "utilization": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 3, 00:27:52.142 "state": "FREE", 00:27:52.142 "utilization": 0.0 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "id": 4, 00:27:52.142 "state": "FREE", 00:27:52.142 "utilization": 0.0 00:27:52.142 } 00:27:52.142 ], 00:27:52.142 "read-only": true 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": "verbose_mode", 00:27:52.142 "value": true, 00:27:52.142 "unit": "", 00:27:52.142 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:52.142 }, 00:27:52.142 { 00:27:52.142 "name": "prep_upgrade_on_shutdown", 00:27:52.142 "value": false, 00:27:52.142 "unit": "", 00:27:52.142 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:52.142 } 00:27:52.142 ] 00:27:52.142 } 00:27:52.142 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:27:52.142 11:43:51 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:52.142 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:27:52.404 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:27:52.404 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:27:52.404 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:27:52.404 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:52.404 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:27:52.666 Validate MD5 checksum, iteration 1 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:52.666 11:43:51 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:52.666 [2024-11-05 11:43:51.801649] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
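The two jq filters traced above reduce the bdev_ftl_get_properties JSON to plain counts: cache chunks whose utilization is not 0.0, and bands still in the OPENED state; both come back 0 in this run, so the test proceeds. A minimal standalone sketch of the same check, assuming rpc.py is on PATH and the FTL bdev is still registered as "ftl":

    # count in-use NV cache chunks and still-open bands (sketch, not the test's own helper)
    props=$(./scripts/rpc.py bdev_ftl_get_properties -b ftl)
    used=$(jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' <<< "$props")
    opened=$(jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' <<< "$props")
    echo "used=$used opened=$opened"    # expected: used=0 opened=0 before the upgrade run
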
00:27:52.666 [2024-11-05 11:43:51.802073] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80393 ] 00:27:52.927 [2024-11-05 11:43:51.965751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.927 [2024-11-05 11:43:52.094400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:54.842  [2024-11-05T11:43:54.378Z] Copying: 620/1024 [MB] (620 MBps) [2024-11-05T11:43:55.762Z] Copying: 1024/1024 [MB] (average 600 MBps) 00:27:56.488 00:27:56.488 11:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:27:56.488 11:43:55 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fcd470ead6d4a21d3f921bef22a78a4d 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fcd470ead6d4a21d3f921bef22a78a4d != \f\c\d\4\7\0\e\a\d\6\d\4\a\2\1\d\3\f\9\2\1\b\e\f\2\2\a\7\8\a\4\d ]] 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:27:58.404 Validate MD5 checksum, iteration 2 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:58.404 11:43:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:58.404 [2024-11-05 11:43:57.598051] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
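Each checksum iteration above follows the same pattern: tcp_dd pulls 1024 MiB from the ftln1 namespace into test/ftl/file at the current --skip offset, md5sum plus cut extracts the digest, and the script compares it against the digest expected for that window (the backslash-heavy right-hand side of the [[ ]] test is simply the literal digest with every character glob-escaped). A simplified sketch of one pass, assuming sum_expected already holds the expected digest for the current window:

    # one validation window (sketch); the real script inlines the expected digest into the [[ ]] pattern
    md5=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    [[ "$md5" == "$sum_expected" ]] || { echo "MD5 mismatch at skip=$skip"; exit 1; }
    skip=$((skip + 1024))    # advance to the next 1024 MiB window for the following iteration
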
00:27:58.404 [2024-11-05 11:43:57.598161] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80461 ] 00:27:58.664 [2024-11-05 11:43:57.757706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.665 [2024-11-05 11:43:57.852690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.578  [2024-11-05T11:44:00.113Z] Copying: 641/1024 [MB] (641 MBps) [2024-11-05T11:44:01.052Z] Copying: 1024/1024 [MB] (average 631 MBps) 00:28:01.778 00:28:01.778 11:44:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:01.778 11:44:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dd572ae63df9e81c8c75f08b69b024a5 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dd572ae63df9e81c8c75f08b69b024a5 != \d\d\5\7\2\a\e\6\3\d\f\9\e\8\1\c\8\c\7\5\f\0\8\b\6\9\b\0\2\4\a\5 ]] 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 80313 ]] 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 80313 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80522 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80522 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80522 ']' 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
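The kill -9 above is the dirty-shutdown step: tcp_target_shutdown_dirty terminates the old target (pid 80313) without letting FTL persist a clean shutdown state, and tcp_target_setup then starts a fresh spdk_tgt (pid 80522) from the saved tgt.json, so the next FTL startup has to take the recovery path seen below. A hedged sketch of that restart, assuming spdk_tgt_pid still holds the old target's PID:

    # dirty restart (sketch); the real helpers live in test/ftl/common.sh
    kill -9 "$spdk_tgt_pid"                                   # SIGKILL, so no graceful FTL shutdown runs
    ./build/bin/spdk_tgt '--cpumask=[0]' --config=test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5                                             # crude stand-in for waitforlisten's polling
    done
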
00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:03.688 11:44:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:03.949 [2024-11-05 11:44:02.990856] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:28:03.949 [2024-11-05 11:44:02.990983] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80522 ] 00:28:03.950 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 80313 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:03.950 [2024-11-05 11:44:03.146562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:04.211 [2024-11-05 11:44:03.233557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.783 [2024-11-05 11:44:03.798203] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:04.783 [2024-11-05 11:44:03.798253] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:04.783 [2024-11-05 11:44:03.941053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.783 [2024-11-05 11:44:03.941089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:04.783 [2024-11-05 11:44:03.941099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.783 [2024-11-05 11:44:03.941105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.783 [2024-11-05 11:44:03.941142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.783 [2024-11-05 11:44:03.941150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:04.783 [2024-11-05 11:44:03.941157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:28:04.783 [2024-11-05 11:44:03.941162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.783 [2024-11-05 11:44:03.941180] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:04.783 [2024-11-05 11:44:03.941688] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:04.783 [2024-11-05 11:44:03.941700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.783 [2024-11-05 11:44:03.941706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:04.783 [2024-11-05 11:44:03.941712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.527 ms 00:28:04.783 [2024-11-05 11:44:03.941718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.783 [2024-11-05 11:44:03.941998] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:04.783 [2024-11-05 11:44:03.954451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.954480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:04.784 [2024-11-05 11:44:03.954490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.452 ms 
00:28:04.784 [2024-11-05 11:44:03.954497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.961399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.961428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:04.784 [2024-11-05 11:44:03.961438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:04.784 [2024-11-05 11:44:03.961444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.961685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.961694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:04.784 [2024-11-05 11:44:03.961701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.182 ms 00:28:04.784 [2024-11-05 11:44:03.961706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.961742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.961750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:04.784 [2024-11-05 11:44:03.961757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:04.784 [2024-11-05 11:44:03.961762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.961781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.961787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:04.784 [2024-11-05 11:44:03.961793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:04.784 [2024-11-05 11:44:03.961799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.961830] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:04.784 [2024-11-05 11:44:03.964116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.964150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:04.784 [2024-11-05 11:44:03.964157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.290 ms 00:28:04.784 [2024-11-05 11:44:03.964162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.964184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.964193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:04.784 [2024-11-05 11:44:03.964199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:04.784 [2024-11-05 11:44:03.964205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.964220] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:04.784 [2024-11-05 11:44:03.964234] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:04.784 [2024-11-05 11:44:03.964260] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:04.784 [2024-11-05 11:44:03.964271] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:04.784 [2024-11-05 
11:44:03.964352] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:04.784 [2024-11-05 11:44:03.964359] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:04.784 [2024-11-05 11:44:03.964367] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:04.784 [2024-11-05 11:44:03.964375] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964381] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964388] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:04.784 [2024-11-05 11:44:03.964393] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:04.784 [2024-11-05 11:44:03.964399] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:04.784 [2024-11-05 11:44:03.964404] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:04.784 [2024-11-05 11:44:03.964410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.964416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:04.784 [2024-11-05 11:44:03.964423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:28:04.784 [2024-11-05 11:44:03.964429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.964493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.784 [2024-11-05 11:44:03.964499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:04.784 [2024-11-05 11:44:03.964505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:28:04.784 [2024-11-05 11:44:03.964510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.784 [2024-11-05 11:44:03.964585] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:04.784 [2024-11-05 11:44:03.964592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:04.784 [2024-11-05 11:44:03.964598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964606] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964611] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:04.784 [2024-11-05 11:44:03.964616] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:04.784 [2024-11-05 11:44:03.964628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:04.784 [2024-11-05 11:44:03.964633] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:04.784 [2024-11-05 11:44:03.964638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964643] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:04.784 [2024-11-05 11:44:03.964649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:04.784 [2024-11-05 11:44:03.964654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 
11:44:03.964659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:04.784 [2024-11-05 11:44:03.964664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:04.784 [2024-11-05 11:44:03.964669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964674] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:04.784 [2024-11-05 11:44:03.964679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:04.784 [2024-11-05 11:44:03.964683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:04.784 [2024-11-05 11:44:03.964693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964703] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:04.784 [2024-11-05 11:44:03.964712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964722] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:04.784 [2024-11-05 11:44:03.964727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:04.784 [2024-11-05 11:44:03.964741] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:04.784 [2024-11-05 11:44:03.964756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:04.784 [2024-11-05 11:44:03.964771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:04.784 [2024-11-05 11:44:03.964786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964796] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:04.784 [2024-11-05 11:44:03.964809] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:04.784 [2024-11-05 11:44:03.964815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964822] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:04.784 [2024-11-05 11:44:03.964829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:04.784 
[2024-11-05 11:44:03.964834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:04.784 [2024-11-05 11:44:03.964845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:04.784 [2024-11-05 11:44:03.964851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:04.784 [2024-11-05 11:44:03.964855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:04.784 [2024-11-05 11:44:03.964860] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:04.784 [2024-11-05 11:44:03.964865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:04.784 [2024-11-05 11:44:03.964871] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:04.784 [2024-11-05 11:44:03.964877] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:04.785 [2024-11-05 11:44:03.964884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:04.785 [2024-11-05 11:44:03.964895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:04.785 [2024-11-05 11:44:03.964911] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:04.785 [2024-11-05 11:44:03.964917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:04.785 [2024-11-05 11:44:03.964922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:04.785 [2024-11-05 11:44:03.964927] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964950] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964955] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964961] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] 
Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:04.785 [2024-11-05 11:44:03.964966] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:04.785 [2024-11-05 11:44:03.964974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964980] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:04.785 [2024-11-05 11:44:03.964986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:04.785 [2024-11-05 11:44:03.964991] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:04.785 [2024-11-05 11:44:03.964997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:04.785 [2024-11-05 11:44:03.965003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:03.965011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:04.785 [2024-11-05 11:44:03.965016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:28:04.785 [2024-11-05 11:44:03.965021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:03.983965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:03.983994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:04.785 [2024-11-05 11:44:03.984002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.906 ms 00:28:04.785 [2024-11-05 11:44:03.984008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:03.984036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:03.984043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:04.785 [2024-11-05 11:44:03.984050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:28:04.785 [2024-11-05 11:44:03.984056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.007715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.007742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:04.785 [2024-11-05 11:44:04.007750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.620 ms 00:28:04.785 [2024-11-05 11:44:04.007756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.007776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.007783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:04.785 [2024-11-05 11:44:04.007789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:04.785 [2024-11-05 11:44:04.007795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.007873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.007881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 
00:28:04.785 [2024-11-05 11:44:04.007887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:28:04.785 [2024-11-05 11:44:04.007893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.007923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.007929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:04.785 [2024-11-05 11:44:04.007936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:04.785 [2024-11-05 11:44:04.007941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.019311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.019340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:04.785 [2024-11-05 11:44:04.019348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.354 ms 00:28:04.785 [2024-11-05 11:44:04.019354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.019429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.019438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:04.785 [2024-11-05 11:44:04.019445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:28:04.785 [2024-11-05 11:44:04.019450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.043279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.043330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:04.785 [2024-11-05 11:44:04.043347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.813 ms 00:28:04.785 [2024-11-05 11:44:04.043359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:04.785 [2024-11-05 11:44:04.052166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:04.785 [2024-11-05 11:44:04.052192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:04.785 [2024-11-05 11:44:04.052200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.402 ms 00:28:04.785 [2024-11-05 11:44:04.052210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.094771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.094816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:05.047 [2024-11-05 11:44:04.094830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.502 ms 00:28:05.047 [2024-11-05 11:44:04.094837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.094955] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:05.047 [2024-11-05 11:44:04.095047] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:05.047 [2024-11-05 11:44:04.095143] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:05.047 [2024-11-05 11:44:04.095226] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:05.047 [2024-11-05 11:44:04.095234] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.095240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:05.047 [2024-11-05 11:44:04.095247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.362 ms 00:28:05.047 [2024-11-05 11:44:04.095252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.095295] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:05.047 [2024-11-05 11:44:04.095303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.095309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:05.047 [2024-11-05 11:44:04.095319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:05.047 [2024-11-05 11:44:04.095325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.106408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.106439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:05.047 [2024-11-05 11:44:04.106450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.066 ms 00:28:05.047 [2024-11-05 11:44:04.106456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.112877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.112904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:05.047 [2024-11-05 11:44:04.112912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:05.047 [2024-11-05 11:44:04.112919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.047 [2024-11-05 11:44:04.112984] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:05.047 [2024-11-05 11:44:04.113095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.047 [2024-11-05 11:44:04.113106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:05.047 [2024-11-05 11:44:04.113113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.112 ms 00:28:05.047 [2024-11-05 11:44:04.113119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.308 [2024-11-05 11:44:04.571552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.308 [2024-11-05 11:44:04.571621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:05.308 [2024-11-05 11:44:04.571637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 457.783 ms 00:28:05.308 [2024-11-05 11:44:04.571645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.308 [2024-11-05 11:44:04.576228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.308 [2024-11-05 11:44:04.576264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:05.308 [2024-11-05 11:44:04.576275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.660 ms 00:28:05.308 [2024-11-05 11:44:04.576285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.308 [2024-11-05 11:44:04.577112] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered 
chunk, offset = 262144, seq id 14 00:28:05.308 [2024-11-05 11:44:04.577148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.308 [2024-11-05 11:44:04.577158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:05.308 [2024-11-05 11:44:04.577168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.835 ms 00:28:05.308 [2024-11-05 11:44:04.577176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.308 [2024-11-05 11:44:04.577208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.308 [2024-11-05 11:44:04.577218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:05.308 [2024-11-05 11:44:04.577227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:05.308 [2024-11-05 11:44:04.577235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:05.308 [2024-11-05 11:44:04.577272] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 464.284 ms, result 0 00:28:05.308 [2024-11-05 11:44:04.577311] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:05.308 [2024-11-05 11:44:04.577380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:05.308 [2024-11-05 11:44:04.577391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:05.308 [2024-11-05 11:44:04.577400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:05.308 [2024-11-05 11:44:04.577408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.169479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.169550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:06.251 [2024-11-05 11:44:05.169566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 591.182 ms 00:28:06.251 [2024-11-05 11:44:05.169575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.174008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.174048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:06.251 [2024-11-05 11:44:05.174060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.328 ms 00:28:06.251 [2024-11-05 11:44:05.174068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.174753] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:06.251 [2024-11-05 11:44:05.174792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.174815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:06.251 [2024-11-05 11:44:05.174825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.695 ms 00:28:06.251 [2024-11-05 11:44:05.174833] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.174865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.174875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:06.251 [2024-11-05 11:44:05.174885] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:06.251 [2024-11-05 11:44:05.174892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.174930] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 597.612 ms, result 0 00:28:06.251 [2024-11-05 11:44:05.174973] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:06.251 [2024-11-05 11:44:05.174985] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:06.251 [2024-11-05 11:44:05.174995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.175004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:06.251 [2024-11-05 11:44:05.175013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1062.026 ms 00:28:06.251 [2024-11-05 11:44:05.175022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.175053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.175062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:06.251 [2024-11-05 11:44:05.175075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.251 [2024-11-05 11:44:05.175083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.187082] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:06.251 [2024-11-05 11:44:05.187232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.187244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:06.251 [2024-11-05 11:44:05.187255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.133 ms 00:28:06.251 [2024-11-05 11:44:05.187264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.187985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.188006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:06.251 [2024-11-05 11:44:05.188017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.642 ms 00:28:06.251 [2024-11-05 11:44:05.188028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:06.251 [2024-11-05 11:44:05.190287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.207 ms 00:28:06.251 [2024-11-05 11:44:05.190295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:06.251 [2024-11-05 11:44:05.190352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:06.251 [2024-11-05 11:44:05.190359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190469] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:06.251 [2024-11-05 11:44:05.190488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:06.251 [2024-11-05 11:44:05.190496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:06.251 [2024-11-05 11:44:05.190534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:06.251 [2024-11-05 11:44:05.190541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190569] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:06.251 [2024-11-05 11:44:05.190583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:06.251 [2024-11-05 11:44:05.190598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:06.251 [2024-11-05 11:44:05.190606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.190660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:06.251 [2024-11-05 11:44:05.190670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:06.251 [2024-11-05 11:44:05.190678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.035 ms 00:28:06.251 [2024-11-05 11:44:05.190685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:06.251 [2024-11-05 11:44:05.191713] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1250.171 ms, result 0 00:28:06.251 [2024-11-05 11:44:05.204135] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:06.251 [2024-11-05 11:44:05.220139] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:06.251 [2024-11-05 11:44:05.229021] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:06.513 Validate MD5 checksum, iteration 1 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:06.513 11:44:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:06.513 [2024-11-05 11:44:05.609336] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:28:06.513 [2024-11-05 11:44:05.609480] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80559 ] 00:28:06.513 [2024-11-05 11:44:05.773810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.774 [2024-11-05 11:44:05.893004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:08.157  [2024-11-05T11:44:08.005Z] Copying: 676/1024 [MB] (676 MBps) [2024-11-05T11:44:11.326Z] Copying: 1024/1024 [MB] (average 677 MBps) 00:28:12.052 00:28:12.052 11:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:12.052 11:44:10 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:13.965 Validate MD5 checksum, iteration 2 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=fcd470ead6d4a21d3f921bef22a78a4d 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ fcd470ead6d4a21d3f921bef22a78a4d != \f\c\d\4\7\0\e\a\d\6\d\4\a\2\1\d\3\f\9\2\1\b\e\f\2\2\a\7\8\a\4\d ]] 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:13.965 11:44:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:13.965 [2024-11-05 11:44:12.999729] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 00:28:13.965 [2024-11-05 11:44:12.999832] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80637 ] 00:28:13.965 [2024-11-05 11:44:13.149432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:13.965 [2024-11-05 11:44:13.225090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.877  [2024-11-05T11:44:15.151Z] Copying: 684/1024 [MB] (684 MBps) [2024-11-05T11:44:17.692Z] Copying: 1024/1024 [MB] (average 689 MBps) 00:28:18.418 00:28:18.418 11:44:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:18.418 11:44:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=dd572ae63df9e81c8c75f08b69b024a5 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ dd572ae63df9e81c8c75f08b69b024a5 != \d\d\5\7\2\a\e\6\3\d\f\9\e\8\1\c\8\c\7\5\f\0\8\b\6\9\b\0\2\4\a\5 ]] 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:20.323 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80522 ]] 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80522 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80522 ']' 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80522 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80522 00:28:20.586 killing process with pid 80522 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80522' 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80522 00:28:20.586 11:44:19 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80522 00:28:21.157 [2024-11-05 11:44:20.205668] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:21.157 [2024-11-05 11:44:20.216084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.216119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:21.157 [2024-11-05 11:44:20.216129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:28:21.157 [2024-11-05 11:44:20.216136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.216152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:21.157 [2024-11-05 11:44:20.218320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.218345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:21.157 [2024-11-05 11:44:20.218354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.158 ms 00:28:21.157 [2024-11-05 11:44:20.218364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.218542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.218550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:21.157 [2024-11-05 11:44:20.218556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.162 ms 00:28:21.157 [2024-11-05 11:44:20.218562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.219565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.219696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:21.157 [2024-11-05 11:44:20.219707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.992 ms 00:28:21.157 [2024-11-05 11:44:20.219713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.220601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.220616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:21.157 [2024-11-05 11:44:20.220624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.859 ms 00:28:21.157 [2024-11-05 11:44:20.220630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.227779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.227817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:21.157 [2024-11-05 11:44:20.227825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.123 ms 00:28:21.157 [2024-11-05 11:44:20.227832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.231988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.232013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl] name: Persist valid map metadata 00:28:21.157 [2024-11-05 11:44:20.232021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.125 ms 00:28:21.157 [2024-11-05 11:44:20.232027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.232085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.232093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:21.157 [2024-11-05 11:44:20.232100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:28:21.157 [2024-11-05 11:44:20.232106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.239325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.239348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:21.157 [2024-11-05 11:44:20.239355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.206 ms 00:28:21.157 [2024-11-05 11:44:20.239361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.246536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.246638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:21.157 [2024-11-05 11:44:20.246650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.149 ms 00:28:21.157 [2024-11-05 11:44:20.246655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.253621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.253713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:21.157 [2024-11-05 11:44:20.253724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.942 ms 00:28:21.157 [2024-11-05 11:44:20.253730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.260709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.157 [2024-11-05 11:44:20.260811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:21.157 [2024-11-05 11:44:20.260822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.936 ms 00:28:21.157 [2024-11-05 11:44:20.260828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.157 [2024-11-05 11:44:20.260850] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:21.157 [2024-11-05 11:44:20.260864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.157 [2024-11-05 11:44:20.260872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.157 [2024-11-05 11:44:20.260878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:21.157 [2024-11-05 11:44:20.260884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260901] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.157 [2024-11-05 11:44:20.260947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.158 [2024-11-05 11:44:20.260952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.158 [2024-11-05 11:44:20.260958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.158 [2024-11-05 11:44:20.260963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.158 [2024-11-05 11:44:20.260970] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:21.158 [2024-11-05 11:44:20.260975] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 18392b83-52a5-494d-b471-a234a8c3b327 00:28:21.158 [2024-11-05 11:44:20.260981] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:21.158 [2024-11-05 11:44:20.260987] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:21.158 [2024-11-05 11:44:20.260992] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:21.158 [2024-11-05 11:44:20.260997] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:21.158 [2024-11-05 11:44:20.261002] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:21.158 [2024-11-05 11:44:20.261008] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:21.158 [2024-11-05 11:44:20.261013] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:21.158 [2024-11-05 11:44:20.261018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:21.158 [2024-11-05 11:44:20.261022] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:21.158 [2024-11-05 11:44:20.261028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.158 [2024-11-05 11:44:20.261034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:21.158 [2024-11-05 11:44:20.261042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.178 ms 00:28:21.158 [2024-11-05 11:44:20.261048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.270705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.158 [2024-11-05 11:44:20.270729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: 
Deinitialize L2P 00:28:21.158 [2024-11-05 11:44:20.270737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.644 ms 00:28:21.158 [2024-11-05 11:44:20.270742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.271023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:21.158 [2024-11-05 11:44:20.271038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:21.158 [2024-11-05 11:44:20.271045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.267 ms 00:28:21.158 [2024-11-05 11:44:20.271050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.304180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.304274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:21.158 [2024-11-05 11:44:20.304285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.304291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.304312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.304322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:21.158 [2024-11-05 11:44:20.304329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.304335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.304383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.304391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:21.158 [2024-11-05 11:44:20.304397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.304403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.304416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.304422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:21.158 [2024-11-05 11:44:20.304430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.304437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.364903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.364935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:21.158 [2024-11-05 11:44:20.364944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.364950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:21.158 [2024-11-05 11:44:20.413401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413478] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:21.158 [2024-11-05 11:44:20.413485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:21.158 [2024-11-05 11:44:20.413535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:21.158 [2024-11-05 11:44:20.413631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:21.158 [2024-11-05 11:44:20.413675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:21.158 [2024-11-05 11:44:20.413722] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:21.158 [2024-11-05 11:44:20.413767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:21.158 [2024-11-05 11:44:20.413774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:21.158 [2024-11-05 11:44:20.413781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:21.158 [2024-11-05 11:44:20.413883] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 197.779 ms, result 0 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:22.101 Remove shared memory files 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove 
shared memory files 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid80313 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:22.101 ************************************ 00:28:22.101 END TEST ftl_upgrade_shutdown 00:28:22.101 ************************************ 00:28:22.101 00:28:22.101 real 1m20.624s 00:28:22.101 user 1m52.260s 00:28:22.101 sys 0m18.001s 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:22.101 11:44:21 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:22.101 Process with pid 72148 is not found 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@14 -- # killprocess 72148 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@952 -- # '[' -z 72148 ']' 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@956 -- # kill -0 72148 00:28:22.101 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (72148) - No such process 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 72148 is not found' 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:22.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=80764 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@20 -- # waitforlisten 80764 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@833 -- # '[' -z 80764 ']' 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:22.101 11:44:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:22.101 11:44:21 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:22.101 [2024-11-05 11:44:21.175867] Starting SPDK v25.01-pre git sha1 1aeff8917 / DPDK 24.03.0 initialization... 
00:28:22.101 [2024-11-05 11:44:21.175982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80764 ] 00:28:22.101 [2024-11-05 11:44:21.331313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.360 [2024-11-05 11:44:21.409835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.933 11:44:22 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:22.933 11:44:22 ftl -- common/autotest_common.sh@866 -- # return 0 00:28:22.933 11:44:22 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:23.194 nvme0n1 00:28:23.194 11:44:22 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:23.194 11:44:22 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:23.194 11:44:22 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:23.455 11:44:22 ftl -- ftl/common.sh@28 -- # stores=1ea019db-28f4-4b0c-9c62-53a1649f25ab 00:28:23.455 11:44:22 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:23.455 11:44:22 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1ea019db-28f4-4b0c-9c62-53a1649f25ab 00:28:23.455 11:44:22 ftl -- ftl/ftl.sh@23 -- # killprocess 80764 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@952 -- # '[' -z 80764 ']' 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@956 -- # kill -0 80764 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@957 -- # uname 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80764 00:28:23.455 killing process with pid 80764 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80764' 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@971 -- # kill 80764 00:28:23.455 11:44:22 ftl -- common/autotest_common.sh@976 -- # wait 80764 00:28:24.871 11:44:23 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:24.871 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:24.871 Waiting for block devices as requested 00:28:24.871 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:25.132 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:25.132 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:25.132 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:28:30.421 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:30.421 Remove shared memory files 00:28:30.421 11:44:29 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:30.421 11:44:29 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:30.421 11:44:29 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:30.421 11:44:29 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:30.421 11:44:29 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:30.421 11:44:29 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:30.421 11:44:29 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:30.421 
************************************ 00:28:30.421 END TEST ftl 00:28:30.421 ************************************ 00:28:30.421 00:28:30.421 real 12m35.119s 00:28:30.421 user 14m33.993s 00:28:30.421 sys 1m13.311s 00:28:30.421 11:44:29 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:30.421 11:44:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:30.421 11:44:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:30.421 11:44:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:30.421 11:44:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:30.421 11:44:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:30.421 11:44:29 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:30.421 11:44:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:30.421 11:44:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:30.421 11:44:29 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:30.421 11:44:29 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:30.421 11:44:29 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:30.421 11:44:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:30.421 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:28:30.421 11:44:29 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:30.421 11:44:29 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:28:30.421 11:44:29 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:28:30.421 11:44:29 -- common/autotest_common.sh@10 -- # set +x 00:28:31.363 INFO: APP EXITING 00:28:31.363 INFO: killing all VMs 00:28:31.363 INFO: killing vhost app 00:28:31.363 INFO: EXIT DONE 00:28:31.624 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:31.885 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:31.885 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:31.885 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:31.885 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:32.147 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:32.408 Cleaning 00:28:32.408 Removing: /var/run/dpdk/spdk0/config 00:28:32.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:32.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:32.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:32.408 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:32.408 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:32.408 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:32.408 Removing: /var/run/dpdk/spdk0 00:28:32.408 Removing: /var/run/dpdk/spdk_pid56880 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57082 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57295 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57392 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57427 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57555 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57570 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57767 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57862 00:28:32.408 Removing: /var/run/dpdk/spdk_pid57961 00:28:32.408 Removing: /var/run/dpdk/spdk_pid58072 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58169 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58209 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58245 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58316 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58394 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58825 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58878 
00:28:32.409 Removing: /var/run/dpdk/spdk_pid58941 00:28:32.409 Removing: /var/run/dpdk/spdk_pid58957 00:28:32.409 Removing: /var/run/dpdk/spdk_pid59058 00:28:32.409 Removing: /var/run/dpdk/spdk_pid59064 00:28:32.409 Removing: /var/run/dpdk/spdk_pid59166 00:28:32.409 Removing: /var/run/dpdk/spdk_pid59182 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59235 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59253 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59306 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59329 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59490 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59526 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59610 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59782 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59860 00:28:32.670 Removing: /var/run/dpdk/spdk_pid59897 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60330 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60423 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60528 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60581 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60607 00:28:32.670 Removing: /var/run/dpdk/spdk_pid60685 00:28:32.670 Removing: /var/run/dpdk/spdk_pid61310 00:28:32.670 Removing: /var/run/dpdk/spdk_pid61345 00:28:32.670 Removing: /var/run/dpdk/spdk_pid61805 00:28:32.670 Removing: /var/run/dpdk/spdk_pid61898 00:28:32.670 Removing: /var/run/dpdk/spdk_pid62007 00:28:32.670 Removing: /var/run/dpdk/spdk_pid62060 00:28:32.670 Removing: /var/run/dpdk/spdk_pid62080 00:28:32.670 Removing: /var/run/dpdk/spdk_pid62111 00:28:32.670 Removing: /var/run/dpdk/spdk_pid63945 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64086 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64090 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64102 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64147 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64151 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64163 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64208 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64212 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64224 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64269 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64273 00:28:32.670 Removing: /var/run/dpdk/spdk_pid64285 00:28:32.670 Removing: /var/run/dpdk/spdk_pid65645 00:28:32.670 Removing: /var/run/dpdk/spdk_pid65742 00:28:32.670 Removing: /var/run/dpdk/spdk_pid67144 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68539 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68621 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68704 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68780 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68879 00:28:32.670 Removing: /var/run/dpdk/spdk_pid68948 00:28:32.670 Removing: /var/run/dpdk/spdk_pid69090 00:28:32.670 Removing: /var/run/dpdk/spdk_pid69449 00:28:32.670 Removing: /var/run/dpdk/spdk_pid69480 00:28:32.670 Removing: /var/run/dpdk/spdk_pid69925 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70108 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70201 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70315 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70361 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70387 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70678 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70738 00:28:32.670 Removing: /var/run/dpdk/spdk_pid70807 00:28:32.670 Removing: /var/run/dpdk/spdk_pid71199 00:28:32.670 Removing: /var/run/dpdk/spdk_pid71338 00:28:32.670 Removing: /var/run/dpdk/spdk_pid72148 00:28:32.670 Removing: /var/run/dpdk/spdk_pid72286 00:28:32.670 Removing: /var/run/dpdk/spdk_pid72455 00:28:32.670 Removing: 
/var/run/dpdk/spdk_pid72547 00:28:32.670 Removing: /var/run/dpdk/spdk_pid72853 00:28:32.670 Removing: /var/run/dpdk/spdk_pid73105 00:28:32.670 Removing: /var/run/dpdk/spdk_pid73465 00:28:32.670 Removing: /var/run/dpdk/spdk_pid73647 00:28:32.670 Removing: /var/run/dpdk/spdk_pid73788 00:28:32.670 Removing: /var/run/dpdk/spdk_pid73835 00:28:32.670 Removing: /var/run/dpdk/spdk_pid74006 00:28:32.670 Removing: /var/run/dpdk/spdk_pid74031 00:28:32.670 Removing: /var/run/dpdk/spdk_pid74084 00:28:32.670 Removing: /var/run/dpdk/spdk_pid74328 00:28:32.670 Removing: /var/run/dpdk/spdk_pid74564 00:28:32.670 Removing: /var/run/dpdk/spdk_pid75143 00:28:32.670 Removing: /var/run/dpdk/spdk_pid75817 00:28:32.670 Removing: /var/run/dpdk/spdk_pid76357 00:28:32.670 Removing: /var/run/dpdk/spdk_pid77146 00:28:32.670 Removing: /var/run/dpdk/spdk_pid77289 00:28:32.670 Removing: /var/run/dpdk/spdk_pid77376 00:28:32.670 Removing: /var/run/dpdk/spdk_pid77722 00:28:32.670 Removing: /var/run/dpdk/spdk_pid77775 00:28:32.670 Removing: /var/run/dpdk/spdk_pid78399 00:28:32.670 Removing: /var/run/dpdk/spdk_pid78850 00:28:32.670 Removing: /var/run/dpdk/spdk_pid79773 00:28:32.670 Removing: /var/run/dpdk/spdk_pid79901 00:28:32.670 Removing: /var/run/dpdk/spdk_pid79949 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80002 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80060 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80119 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80313 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80393 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80461 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80522 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80559 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80637 00:28:32.670 Removing: /var/run/dpdk/spdk_pid80764 00:28:32.670 Clean 00:28:32.670 11:44:31 -- common/autotest_common.sh@1451 -- # return 0 00:28:32.670 11:44:31 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:32.670 11:44:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.670 11:44:31 -- common/autotest_common.sh@10 -- # set +x 00:28:32.932 11:44:31 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:32.932 11:44:31 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:32.932 11:44:31 -- common/autotest_common.sh@10 -- # set +x 00:28:32.932 11:44:31 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:32.932 11:44:31 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:32.932 11:44:31 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:32.932 11:44:31 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:32.932 11:44:31 -- spdk/autotest.sh@394 -- # hostname 00:28:32.932 11:44:31 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:32.932 geninfo: WARNING: invalid characters removed from testname! 
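The coverage entries that follow merge the pre-test baseline (cov_base.info) with the per-test capture (cov_test.info) and then strip third-party and helper paths out of the combined tracefile. A condensed sketch of that post-processing, keeping the file names and remove patterns exactly as they appear in the log but dropping the repeated --rc options for brevity:

    #!/usr/bin/env bash
    # Sketch only: condensed form of the lcov post-processing recorded below.
    out=/home/vagrant/spdk_repo/spdk/../output   # output directory used throughout this log

    # Merge baseline and per-test coverage into one tracefile.
    lcov -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"

    # Remove external and tooling code from the totals (patterns as logged).
    lcov -q -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '/usr/*' --ignore-errors unused -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/examples/vmd/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_lspci/*' -o "$out/cov_total.info"
    lcov -q -r "$out/cov_total.info" '*/app/spdk_top/*' -o "$out/cov_total.info"
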
00:28:59.508 11:44:55 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:00.080 11:44:59 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:02.627 11:45:01 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:04.012 11:45:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:05.930 11:45:04 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:07.845 11:45:06 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:10.392 11:45:09 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:10.392 11:45:09 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:10.392 11:45:09 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:10.392 11:45:09 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:10.392 11:45:09 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:10.392 11:45:09 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:10.392 + [[ -n 5024 ]] 00:29:10.392 + sudo kill 5024 00:29:10.402 [Pipeline] } 00:29:10.417 [Pipeline] // timeout 00:29:10.423 [Pipeline] } 00:29:10.436 [Pipeline] // stage 00:29:10.440 [Pipeline] } 00:29:10.453 [Pipeline] // catchError 00:29:10.461 [Pipeline] stage 00:29:10.463 [Pipeline] { (Stop VM) 00:29:10.475 [Pipeline] sh 00:29:10.758 + vagrant halt 00:29:13.312 ==> default: Halting domain... 
00:29:19.914 [Pipeline] sh 00:29:20.206 + vagrant destroy -f 00:29:22.752 ==> default: Removing domain... 00:29:23.710 [Pipeline] sh 00:29:23.996 + mv output /var/jenkins/workspace/nvme-vg-autotest_3/output 00:29:24.007 [Pipeline] } 00:29:24.022 [Pipeline] // stage 00:29:24.027 [Pipeline] } 00:29:24.040 [Pipeline] // dir 00:29:24.045 [Pipeline] } 00:29:24.059 [Pipeline] // wrap 00:29:24.065 [Pipeline] } 00:29:24.078 [Pipeline] // catchError 00:29:24.087 [Pipeline] stage 00:29:24.089 [Pipeline] { (Epilogue) 00:29:24.102 [Pipeline] sh 00:29:24.388 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:29.682 [Pipeline] catchError 00:29:29.684 [Pipeline] { 00:29:29.697 [Pipeline] sh 00:29:29.984 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:29.984 Artifacts sizes are good 00:29:29.995 [Pipeline] } 00:29:30.009 [Pipeline] // catchError 00:29:30.019 [Pipeline] archiveArtifacts 00:29:30.026 Archiving artifacts 00:29:30.117 [Pipeline] cleanWs 00:29:30.129 [WS-CLEANUP] Deleting project workspace... 00:29:30.130 [WS-CLEANUP] Deferred wipeout is used... 00:29:30.137 [WS-CLEANUP] done 00:29:30.139 [Pipeline] } 00:29:30.153 [Pipeline] // stage 00:29:30.158 [Pipeline] } 00:29:30.172 [Pipeline] // node 00:29:30.177 [Pipeline] End of Pipeline 00:29:30.214 Finished: SUCCESS
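
For reference, the ftl_upgrade_shutdown steps earlier in this log validate the FTL device after restart by reading it back over NVMe/TCP in two 1 GiB windows (bs=1048576, count=1024, with skip advancing by 1024 each pass) and comparing each window's md5sum against a previously recorded value. A minimal, self-contained sketch of that validation pattern follows; the device path and scratch file are placeholders, plain dd stands in for the spdk_dd-based tcp_dd helper used by the test, and the two sums are simply the values observed in this run:

    #!/usr/bin/env bash
    # Illustrative only: fixed-window MD5 validation mirroring the skip/count
    # arithmetic seen in the log. Paths are placeholders, not the test's own.
    set -euo pipefail

    dev=/dev/example     # placeholder for the exported FTL namespace (ftln1 in the log)
    tmp=/tmp/ftl_window  # placeholder scratch file
    expected=(fcd470ead6d4a21d3f921bef22a78a4d dd572ae63df9e81c8c75f08b69b024a5)
    bs=1048576           # 1 MiB blocks, as in the log
    count=1024           # 1024 blocks (1 GiB) read per iteration
    skip=0

    for i in "${!expected[@]}"; do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        dd if="$dev" of="$tmp" bs="$bs" count="$count" skip="$skip" status=none
        sum=$(md5sum "$tmp" | cut -d' ' -f1)
        [[ "$sum" == "${expected[$i]}" ]] || { echo "mismatch at iteration $((i + 1))"; exit 1; }
        skip=$((skip + count))
    done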